Dataset Viewer

Column schema (type, with observed range or class count; ⌀ marks nullable columns):

- `url`: string, length 58–61
- `repository_url`: string, 1 class
- `labels_url`: string, length 72–75
- `comments_url`: string, length 67–70
- `events_url`: string, length 65–68
- `html_url`: string, length 46–51
- `id`: int64, 599M–3.28B
- `node_id`: string, length 18–32
- `number`: int64, 1–7.71k
- `title`: string, length 1–290
- `user`: dict
- `labels`: list, length 0–4
- `state`: string, 2 classes
- `locked`: bool, 1 class
- `assignee`: dict
- `assignees`: list, length 0–4
- `milestone`: dict
- `comments`: list, length 0–30
- `created_at`: timestamp[us, tz=UTC], 2020-04-14 10:18:02 – 2025-07-30 11:34:53
- `updated_at`: timestamp[us, tz=UTC], 2020-04-27 16:04:17 – 2025-07-31 05:22:35
- `closed_at`: timestamp[us, tz=UTC], 2020-04-14 12:01:40 – 2025-07-30 14:22:21, ⌀
- `author_association`: string, 4 classes
- `type`: null
- `active_lock_reason`: null
- `sub_issues_summary`: dict
- `body`: string, length 0–228k, ⌀
- `closed_by`: dict
- `reactions`: dict
- `timeline_url`: string, length 67–70
- `performed_via_github_app`: null
- `state_reason`: string, 4 classes
- `draft`: float64, 0–1, ⌀
- `pull_request`: dict
- `created_at_dt`: timestamp[us, tz=UTC], 2020-04-14 10:18:02 – 2025-07-30 11:34:53
- `closed_at_dt`: timestamp[us, tz=UTC], 2020-04-14 12:01:40 – 2025-07-30 14:22:21, ⌀
- `time_to_close`: duration[us]
- `is_pull_request`: bool, 2 classes
---

## #7700 · [doc] map.num_proc needs clarification

- url: https://api.github.com/repos/huggingface/datasets/issues/7700
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/7700/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/7700/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/7700/events
- html_url: https://github.com/huggingface/datasets/issues/7700
- id: 3263922255
- node_id: I_kwDODunzps7Ci4BP
- number: 7700
- user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/196988264?v=4",
"events_url": "https://api.github.com/users/sfc-gh-sbekman/events{/privacy}",
"followers_url": "https://api.github.com/users/sfc-gh-sbekman/followers",
"following_url": "https://api.github.com/users/sfc-gh-sbekman/following{/other_user}",
"gists_url": "https://api.github.com/users/sfc-gh-sbekman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sfc-gh-sbekman",
"id": 196988264,
"login": "sfc-gh-sbekman",
"node_id": "U_kgDOC73NaA",
"organizations_url": "https://api.github.com/users/sfc-gh-sbekman/orgs",
"received_events_url": "https://api.github.com/users/sfc-gh-sbekman/received_events",
"repos_url": "https://api.github.com/users/sfc-gh-sbekman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sfc-gh-sbekman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sfc-gh-sbekman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sfc-gh-sbekman",
"user_view_type": "public"
}
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments: []
- created_at: 2025-07-25T17:35:09
- updated_at: 2025-07-25T17:39:36
- closed_at: null
- author_association: NONE
- type: null
- active_lock_reason: null
- sub_issues_summary:
{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
}
- body:
https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.Dataset.map.num_proc
```
num_proc (int, optional, defaults to None) — Max number of processes when generating cache. Already cached
shards are loaded sequentially.
```
for batch:
```
num_proc (int, optional, defaults to None): The number of processes to use for multiprocessing. If None, no
multiprocessing is used. This can significantly speed up batching for large datasets.
```
So what happens with `map.num_proc`: does it behave the same as `batch.num_proc`, i.e. multiprocessing is used unless `num_proc=None`?
Let's update the doc to be unambiguous.
**bonus**: we could make all of these behave like `DataLoader.num_workers`, where `num_workers==0` implies no multiprocessing. I think that's the most intuitive: with 0 workers, the main process has to do all the work. `None` could behave the same as `0`.
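The proposed semantics can be sketched with a hypothetical helper (not the actual `datasets` implementation): `None` or `0` means the main process does all the work, while `N >= 1` uses a pool of N worker processes.

```python
from multiprocessing import Pool

def apply(fn, items, num_proc=None):
    """Hypothetical sketch of the proposed num_proc semantics:
    None or 0 -> run sequentially in the main process (easiest to debug),
    N >= 1   -> use a pool of N worker processes."""
    if not num_proc:  # covers both None and 0
        return [fn(x) for x in items]
    with Pool(num_proc) as pool:
        return pool.map(fn, items)
```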
context: debugging a failing `map`
Thank you!
- closed_by: null
- reactions:
{
  "+1": 1,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 1,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7700/reactions"
}
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7700/timeline
- performed_via_github_app: null
- state_reason: null
- draft: null
- pull_request: null
- created_at_dt: 2025-07-25T17:35:09
- closed_at_dt: null
- time_to_close: null
- is_pull_request: false
---

## #7706 · Reimplemented partial split download support (revival of #6832)

- url: https://api.github.com/repos/huggingface/datasets/issues/7706
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/7706/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/7706/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/7706/events
- html_url: https://github.com/huggingface/datasets/pull/7706
- id: 3271129240
- node_id: PR_kwDODunzps6hC5uD
- number: 7706
- user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
}
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[
{
"author_association": "CONTRIBUTOR",
"body": " Mario’s Patch (in PR #6832):\r\n```\r\ndef _make_split_generators_kwargs(self, prepare_split_kwargs):\r\n # Pass `pipeline` into `_split_generators()` from `prepare_split_kwargs` if\r\n # it's in the call signature of `_split_generators()`.\r\n # This allows for global preprocessing in beam.\r\n split_generators_kwargs = {}\r\n if \"pipeline\" in inspect.signature(self._split_generators).parameters:\r\n split_generators_kwargs[\"pipeline\"] = prepare_split_kwargs[\"pipeline\"]\r\n split_generators_kwargs.update(super()._make_split_generators_kwargs(prepare_split_kwargs))\r\n return split_generators_kwargs\r\n```\r\n\r\nIn the latest main(in my fork and og repo's main):\r\n```\r\ndef _make_split_generators_kwargs(self, prepare_split_kwargs):\r\n \"\"\"Get kwargs for `self._split_generators()` from `prepare_split_kwargs`.\"\"\"\r\n splits = prepare_split_kwargs.pop(\"splits\", None)\r\n if self._supports_partial_generation():\r\n return {\"splits\": splits}\r\n return {}\r\n```\r\nIt enables passing splits into _split_generators() only for builders that support it(if i am not wrong..). So ignored Beam logic for now!",
"created_at": "2025-07-28T19:46:55Z",
"html_url": "https://github.com/huggingface/datasets/pull/7706#issuecomment-3129110776",
"id": 3129110776,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7706",
"node_id": "IC_kwDODunzps66gnD4",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3129110776/reactions"
},
"updated_at": "2025-07-28T19:47:24Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3129110776",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
}
}
]
- created_at: 2025-07-28T19:40:40
- updated_at: 2025-07-29T09:25:12
- closed_at: null
- author_association: CONTRIBUTOR
- type: null
- active_lock_reason: null
- sub_issues_summary: null
- body:
(revival of #6832)
https://github.com/huggingface/datasets/pull/7648#issuecomment-3084050130
Close https://github.com/huggingface/datasets/issues/4101, and more
---
### PR under work!!!!
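The dispatch described in the comment above can be sketched as a toy class (an illustration mirroring the quoted snippet, not the actual library code): `splits` is forwarded to `_split_generators()` only when that method's signature accepts it.

```python
import inspect

class ToyBuilder:
    """Toy illustration: forward `splits` only to builders that support it."""

    def _supports_partial_generation(self):
        # True when _split_generators() accepts a `splits` kwarg
        return "splits" in inspect.signature(self._split_generators).parameters

    def _make_split_generators_kwargs(self, prepare_split_kwargs):
        """Get kwargs for self._split_generators() from prepare_split_kwargs."""
        splits = prepare_split_kwargs.pop("splits", None)
        if self._supports_partial_generation():
            return {"splits": splits}
        return {}

    def _split_generators(self, splits=None):
        # A real builder would return SplitGenerator objects here.
        return splits if splits is not None else ["train", "test"]
```

A builder whose `_split_generators()` lacks the `splits` parameter would get an empty kwargs dict, so partial generation is silently skipped for it.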
- closed_by: null
- reactions:
{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7706/reactions"
}
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7706/timeline
- performed_via_github_app: null
- state_reason: null
- draft: 0
- pull_request:
{
  "diff_url": "https://github.com/huggingface/datasets/pull/7706.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7706",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7706.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7706"
}
- created_at_dt: 2025-07-28T19:40:40
- closed_at_dt: null
- time_to_close: null
- is_pull_request: true
---

## #7709 · Release 4.0.0 breaks usage patterns of with_format

- url: https://api.github.com/repos/huggingface/datasets/issues/7709
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/7709/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/7709/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/7709/events
- html_url: https://github.com/huggingface/datasets/issues/7709
- id: 3276677990
- node_id: I_kwDODunzps7DTiNm
- number: 7709
- user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4",
"events_url": "https://api.github.com/users/wittenator/events{/privacy}",
"followers_url": "https://api.github.com/users/wittenator/followers",
"following_url": "https://api.github.com/users/wittenator/following{/other_user}",
"gists_url": "https://api.github.com/users/wittenator/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wittenator",
"id": 9154515,
"login": "wittenator",
"node_id": "MDQ6VXNlcjkxNTQ1MTU=",
"organizations_url": "https://api.github.com/users/wittenator/orgs",
"received_events_url": "https://api.github.com/users/wittenator/received_events",
"repos_url": "https://api.github.com/users/wittenator/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wittenator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wittenator/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wittenator",
"user_view_type": "public"
}
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[
{
"author_association": "MEMBER",
"body": "This is a breaking change with 4.0 which introduced `Column` objects. To get the numpy array from a `Column` you can `col[i]`, `col[i:j]` or even `col[:]` if you want the full column as a numpy array:\n\n```python\nfrom datasets import load_dataset\ndataset = load_dataset(...)\ndataset = dataset.with_format(\"numpy\")\nprint(dataset[\"star\"][:].ndim)\n```",
"created_at": "2025-07-30T15:41:59Z",
"html_url": "https://github.com/huggingface/datasets/issues/7709#issuecomment-3136883161",
"id": 3136883161,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7709",
"node_id": "IC_kwDODunzps66-QnZ",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3136883161/reactions"
},
"updated_at": "2025-07-30T15:41:59Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3136883161",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
}
]
- created_at: 2025-07-30T11:34:53
- updated_at: 2025-07-30T15:41:59
- closed_at: null
- author_association: NONE
- type: null
- active_lock_reason: null
- sub_issues_summary:
{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
}
- body:
### Describe the bug
Previously it was possible to access a whole column that was e.g. in numpy format via `with_format` by indexing the column. With the new Column() class this possibility seems to be gone. As far as I can see, this makes working on a whole column (in-memory) more complex, e.g. normalizing an in-memory dataset for which iterating would be too slow. Is this intended behaviour? I couldn't find much documentation on the intended usage of the new Column class yet.
### Steps to reproduce the bug
Steps to reproduce:
```
from datasets import load_dataset
dataset = load_dataset("lhoestq/demo1")
dataset = dataset.with_format("numpy")
print(dataset["star"].ndim)
```
### Expected behavior
Working on whole columns should be possible.
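As a toy illustration of the 4.0 behavior the maintainers describe (a stand-in class, not the real `datasets.Column`): the column stays lazy until indexed, and `col[:]` materializes the whole column at once.

```python
class LazyColumn:
    """Toy stand-in for the 4.0 Column object: data is only
    materialized when indexed, so col[:] returns the full column."""

    def __init__(self, data):
        self._data = list(data)

    def __getitem__(self, key):
        # key may be an int (one value), a slice i:j, or the full slice [:]
        return self._data[key]
```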
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-63-generic-x86_64-with-glibc2.36
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0
- closed_by: null
- reactions:
{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7709/reactions"
}
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7709/timeline
- performed_via_github_app: null
- state_reason: null
- draft: null
- pull_request: null
- created_at_dt: 2025-07-30T11:34:53
- closed_at_dt: null
- time_to_close: null
- is_pull_request: false
---

## #7708 · Concurrent push_to_hub

- url: https://api.github.com/repos/huggingface/datasets/issues/7708
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/7708/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/7708/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/7708/events
- html_url: https://github.com/huggingface/datasets/pull/7708
- id: 3273614584
- node_id: PR_kwDODunzps6hLVip
- number: 7708
- user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[
{
"author_association": "NONE",
"body": "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7708). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"created_at": "2025-07-29T13:17:07Z",
"html_url": "https://github.com/huggingface/datasets/pull/7708#issuecomment-3132501540",
"id": 3132501540,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7708",
"node_id": "IC_kwDODunzps66ti4k",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3132501540/reactions"
},
"updated_at": "2025-07-29T13:17:07Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3132501540",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/99929124?v=4",
"events_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/events{/privacy}",
"followers_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/followers",
"following_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/following{/other_user}",
"gists_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HuggingFaceDocBuilderDev",
"id": 99929124,
"login": "HuggingFaceDocBuilderDev",
"node_id": "U_kgDOBfTMJA",
"organizations_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/orgs",
"received_events_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/received_events",
"repos_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HuggingFaceDocBuilderDev",
"user_view_type": "public"
}
}
]
- created_at: 2025-07-29T13:14:30
- updated_at: 2025-07-30T15:55:00
- closed_at: null
- author_association: MEMBER
- type: null
- active_lock_reason: null
- sub_issues_summary: null
- body:
Retry the step that downloads, updates, and uploads the README.md using `create_commit(..., parent_commit=...)` if there was a commit in the meantime. This should enable concurrent `push_to_hub()` since it won't overwrite the README.md metadata anymore.
DO NOT MERGE FOR NOW: there seems to be one bug that prevents this logic from working.
I'm using `parent_commit` to enable concurrent `push_to_hub()` in `datasets` via a retry mechanism, but I keep running into a weird situation: sometimes `create_commit(..., parent_commit=...)` returns error 500, yet the commit did happen on the Hub side without respecting `parent_commit`.
e.g. request ID:
```
huggingface_hub.errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/lhoestq/tmp/commit/main (Request ID: Root=1-6888d8af-2ce517bc60c69cb378b51526;d1b17993-c5d0-4ccd-9926-060c45f9ed61)
```
fix coming in [internal](https://github.com/huggingface-internal/moon-landing/pull/14617)
close https://github.com/huggingface/datasets/issues/7600
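The retry loop described above can be sketched with stand-in callables instead of real `huggingface_hub` calls (the real mechanism passes `parent_commit=` to `create_commit`); `ParentCommitConflict` is a hypothetical error type standing in for the conflict response:

```python
class ParentCommitConflict(Exception):
    """Stand-in for the error raised when the parent revision moved."""

def commit_with_retry(read_head, do_commit, max_retries=3):
    """Sketch of the retry mechanism: re-read the branch head and retry
    the commit when a concurrent push moved the parent in the meantime."""
    for _ in range(max_retries):
        parent = read_head()          # latest commit sha on the branch
        try:
            return do_commit(parent_commit=parent)
        except ParentCommitConflict:
            continue                  # someone committed first: re-read, retry
    raise RuntimeError("gave up after too many concurrent commits")
```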
- closed_by: null
- reactions:
{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7708/reactions"
}
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7708/timeline
- performed_via_github_app: null
- state_reason: null
- draft: 1
- pull_request:
{
  "diff_url": "https://github.com/huggingface/datasets/pull/7708.diff",
  "html_url": "https://github.com/huggingface/datasets/pull/7708",
  "merged_at": null,
  "patch_url": "https://github.com/huggingface/datasets/pull/7708.patch",
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7708"
}
- created_at_dt: 2025-07-29T13:14:30
- closed_at_dt: null
- time_to_close: null
- is_pull_request: true
---

## #7705 · Can Not read installed dataset in dataset.load(.)

- url: https://api.github.com/repos/huggingface/datasets/issues/7705
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/7705/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/7705/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/7705/events
- html_url: https://github.com/huggingface/datasets/issues/7705
- id: 3269070499
- node_id: I_kwDODunzps7C2g6j
- number: 7705
- user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/52521165?v=4",
"events_url": "https://api.github.com/users/HuangChiEn/events{/privacy}",
"followers_url": "https://api.github.com/users/HuangChiEn/followers",
"following_url": "https://api.github.com/users/HuangChiEn/following{/other_user}",
"gists_url": "https://api.github.com/users/HuangChiEn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HuangChiEn",
"id": 52521165,
"login": "HuangChiEn",
"node_id": "MDQ6VXNlcjUyNTIxMTY1",
"organizations_url": "https://api.github.com/users/HuangChiEn/orgs",
"received_events_url": "https://api.github.com/users/HuangChiEn/received_events",
"repos_url": "https://api.github.com/users/HuangChiEn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HuangChiEn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuangChiEn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HuangChiEn",
"user_view_type": "public"
}
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[
{
"author_association": "MEMBER",
"body": "You can download the dataset locally using [huggingface_hub.snapshot_download](https://huggingface.co/docs/huggingface_hub/v0.34.3/en/package_reference/file_download#huggingface_hub.snapshot_download) and then do\n\n```python\ndataset = load_dataset(local_directory_path)\n```",
"created_at": "2025-07-30T15:44:26Z",
"html_url": "https://github.com/huggingface/datasets/issues/7705#issuecomment-3136890512",
"id": 3136890512,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7705",
"node_id": "IC_kwDODunzps66-SaQ",
"performed_via_github_app": null,
"reactions": {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3136890512/reactions"
},
"updated_at": "2025-07-30T15:44:26Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3136890512",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "> You can download the dataset locally using [huggingface_hub.snapshot_download](https://huggingface.co/docs/huggingface_hub/v0.34.3/en/package_reference/file_download#huggingface_hub.snapshot_download) and then do\n> \n> dataset = load_dataset(local_directory_path)\n\nIt's good suggestion, but my server env is network restriction. It can not directly fetch data from huggingface. I spent lot of time to download and transfer it to the server.\nSo, I attempt to make load_dataset connect to my local dataset. ",
"created_at": "2025-08-05T01:22:10Z",
"html_url": "https://github.com/huggingface/datasets/issues/7705#issuecomment-3152953771",
"id": 3152953771,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7705",
"node_id": "IC_kwDODunzps677kGr",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3152953771/reactions"
},
"updated_at": "2025-08-05T01:22:10Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3152953771",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/52521165?v=4",
"events_url": "https://api.github.com/users/HuangChiEn/events{/privacy}",
"followers_url": "https://api.github.com/users/HuangChiEn/followers",
"following_url": "https://api.github.com/users/HuangChiEn/following{/other_user}",
"gists_url": "https://api.github.com/users/HuangChiEn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HuangChiEn",
"id": 52521165,
"login": "HuangChiEn",
"node_id": "MDQ6VXNlcjUyNTIxMTY1",
"organizations_url": "https://api.github.com/users/HuangChiEn/orgs",
"received_events_url": "https://api.github.com/users/HuangChiEn/received_events",
"repos_url": "https://api.github.com/users/HuangChiEn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HuangChiEn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuangChiEn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HuangChiEn",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "Just Solved it few day before. Will post solution later...\nalso thanks folks quick reply..",
"created_at": "2025-08-05T01:24:32Z",
"html_url": "https://github.com/huggingface/datasets/issues/7705#issuecomment-3152956713",
"id": 3152956713,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7705",
"node_id": "IC_kwDODunzps677k0p",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3152956713/reactions"
},
"updated_at": "2025-08-05T01:24:32Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3152956713",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/52521165?v=4",
"events_url": "https://api.github.com/users/HuangChiEn/events{/privacy}",
"followers_url": "https://api.github.com/users/HuangChiEn/followers",
"following_url": "https://api.github.com/users/HuangChiEn/following{/other_user}",
"gists_url": "https://api.github.com/users/HuangChiEn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HuangChiEn",
"id": 52521165,
"login": "HuangChiEn",
"node_id": "MDQ6VXNlcjUyNTIxMTY1",
"organizations_url": "https://api.github.com/users/HuangChiEn/orgs",
"received_events_url": "https://api.github.com/users/HuangChiEn/received_events",
"repos_url": "https://api.github.com/users/HuangChiEn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HuangChiEn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuangChiEn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HuangChiEn",
"user_view_type": "public"
}
}
]
- created_at: 2025-07-28T09:43:54
- updated_at: 2025-07-30T15:44:26
- closed_at: null
- author_association: NONE
- type: null
- active_lock_reason: null
- sub_issues_summary:
{
  "completed": 0,
  "percent_completed": 0,
  "total": 0
}
- body:
Hi folks, I'm a newbie to the Hugging Face datasets API.
As the title says, I'm facing an issue where the `dataset.load` API cannot connect to the installed dataset.
code snippet:
<img width="572" height="253" alt="Image" src="https://github.com/user-attachments/assets/10f48aaf-d6ca-4239-b1cf-145d74f125d1" />
data path:
"/xxx/joseph/llava_ds/vlm_ds"
It contains all the video clips I want!
<img width="1398" height="261" alt="Image" src="https://github.com/user-attachments/assets/bf213b66-e344-4311-97e7-bc209677ae77" />
I run the py script with:
<img width="1042" height="38" alt="Image" src="https://github.com/user-attachments/assets/8b3fcee4-e1a6-41b8-bee1-91567b00d9d2" />
But something bad happened: even though I provide the dataset path via "HF_HUB_CACHE", it still attempts to download the data from the remote side:
<img width="1697" height="813" alt="Image" src="https://github.com/user-attachments/assets/baa6cff1-a724-4710-a8c4-4805459deffb" />
Any suggestion will be appreciated!!
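The desired behavior can be sketched as a small hypothetical helper (not a `datasets` API): prefer an existing local directory and only fall back to treating the argument as a Hub repo id, so a pre-downloaded copy loads without network access.

```python
import os

def resolve_dataset_source(path_or_repo_id):
    """Hypothetical helper: treat an existing directory as a local
    dataset, otherwise assume a Hub repo id to download from."""
    if os.path.isdir(path_or_repo_id):
        return ("local", path_or_repo_id)
    return ("hub", path_or_repo_id)
```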
- closed_by: null
- reactions:
{
  "+1": 0,
  "-1": 0,
  "confused": 0,
  "eyes": 0,
  "heart": 0,
  "hooray": 0,
  "laugh": 0,
  "rocket": 0,
  "total_count": 0,
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7705/reactions"
}
- timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7705/timeline
- performed_via_github_app: null
- state_reason: null
- draft: null
- pull_request: null
- created_at_dt: 2025-07-28T09:43:54
- closed_at_dt: null
- time_to_close: null
- is_pull_request: false
---

## #7704 · Fix map() example in datasets documentation: define tokenizer before use

- url: https://api.github.com/repos/huggingface/datasets/issues/7704
- repository_url: https://api.github.com/repos/huggingface/datasets
- labels_url: https://api.github.com/repos/huggingface/datasets/issues/7704/labels{/name}
- comments_url: https://api.github.com/repos/huggingface/datasets/issues/7704/comments
- events_url: https://api.github.com/repos/huggingface/datasets/issues/7704/events
- html_url: https://github.com/huggingface/datasets/pull/7704
- id: 3265730177
- node_id: PR_kwDODunzps6gwtb8
- number: 7704
- user:
{
"avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
"events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
"followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
"following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
"gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sanjaykumar030",
"id": 183703408,
"login": "Sanjaykumar030",
"node_id": "U_kgDOCvMXcA",
"organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
"received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
"repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sanjaykumar030",
"user_view_type": "public"
}
- labels: []
- state: open
- locked: false
- assignee: null
- assignees: []
- milestone: null
- comments:
[
{
"author_association": "NONE",
"body": "Hi @lhoestq, just a gentle follow-up on this doc fix PR (#7704). Let me know if any changes are needed — happy to update.\r\nHope this improvement helps users run the example without confusion!",
"created_at": "2025-08-01T13:48:35Z",
"html_url": "https://github.com/huggingface/datasets/pull/7704#issuecomment-3144658468",
"id": 3144658468,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7704",
"node_id": "IC_kwDODunzps67b64k",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3144658468/reactions"
},
"updated_at": "2025-08-01T13:48:35Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3144658468",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
"events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
"followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
"following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
"gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sanjaykumar030",
"id": 183703408,
"login": "Sanjaykumar030",
"node_id": "U_kgDOCvMXcA",
"organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
"received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
"repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sanjaykumar030",
"user_view_type": "public"
}
}
] | 2025-07-26T14:18:17 | 2025-07-26T14:18:17 | null |
NONE
| null | null | null |
## Problem
The current `datasets.Dataset.map()` example in the documentation demonstrates batched processing using a `tokenizer` object without defining or importing it. This causes a `NameError` when users copy and run the example as-is, breaking the expected copy-and-run experience.
## Correction
This PR fixes the issue by explicitly importing and initializing the tokenizer using the Transformers library (`AutoTokenizer.from_pretrained("bert-base-uncased")`), making the example self-contained and runnable without errors.
This will help new users understand the workflow and apply the method correctly.
Closes #7703
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7704/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7704/timeline
| null | null | 0 |
{
"diff_url": "https://github.com/huggingface/datasets/pull/7704.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7704",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7704.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7704"
}
| 2025-07-26T14:18:17 | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7701
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7701/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7701/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7701/events
|
https://github.com/huggingface/datasets/pull/7701
| 3,265,236,296 |
PR_kwDODunzps6gvJ83
| 7,701 |
Update fsspec max version to current release 2025.7.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5445560?v=4",
"events_url": "https://api.github.com/users/rootAvish/events{/privacy}",
"followers_url": "https://api.github.com/users/rootAvish/followers",
"following_url": "https://api.github.com/users/rootAvish/following{/other_user}",
"gists_url": "https://api.github.com/users/rootAvish/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rootAvish",
"id": 5445560,
"login": "rootAvish",
"node_id": "MDQ6VXNlcjU0NDU1NjA=",
"organizations_url": "https://api.github.com/users/rootAvish/orgs",
"received_events_url": "https://api.github.com/users/rootAvish/received_events",
"repos_url": "https://api.github.com/users/rootAvish/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rootAvish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rootAvish/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rootAvish",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
{
"author_association": "CONTRIBUTOR",
    "body": "@lhoestq I ran the test suite locally, and while some tests were failing, those failures are present on the main branch too. Could you please review and trigger the CI?",
"created_at": "2025-07-26T08:02:37Z",
"html_url": "https://github.com/huggingface/datasets/pull/7701#issuecomment-3121473565",
"id": 3121473565,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7701",
"node_id": "IC_kwDODunzps66Degd",
"performed_via_github_app": null,
"reactions": {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3121473565/reactions"
},
"updated_at": "2025-07-26T08:02:37Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3121473565",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/5445560?v=4",
"events_url": "https://api.github.com/users/rootAvish/events{/privacy}",
"followers_url": "https://api.github.com/users/rootAvish/followers",
"following_url": "https://api.github.com/users/rootAvish/following{/other_user}",
"gists_url": "https://api.github.com/users/rootAvish/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rootAvish",
"id": 5445560,
"login": "rootAvish",
"node_id": "MDQ6VXNlcjU0NDU1NjA=",
"organizations_url": "https://api.github.com/users/rootAvish/orgs",
"received_events_url": "https://api.github.com/users/rootAvish/received_events",
"repos_url": "https://api.github.com/users/rootAvish/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rootAvish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rootAvish/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rootAvish",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7701). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"created_at": "2025-07-28T11:14:39Z",
"html_url": "https://github.com/huggingface/datasets/pull/7701#issuecomment-3126742159",
"id": 3126742159,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7701",
"node_id": "IC_kwDODunzps66XkyP",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3126742159/reactions"
},
"updated_at": "2025-07-28T11:14:39Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3126742159",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/99929124?v=4",
"events_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/events{/privacy}",
"followers_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/followers",
"following_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/following{/other_user}",
"gists_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HuggingFaceDocBuilderDev",
"id": 99929124,
"login": "HuggingFaceDocBuilderDev",
"node_id": "U_kgDOBfTMJA",
"organizations_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/orgs",
"received_events_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/received_events",
"repos_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HuggingFaceDocBuilderDev",
"user_view_type": "public"
}
}
] | 2025-07-26T06:47:59 | 2025-07-28T11:58:11 | 2025-07-28T11:58:11 |
CONTRIBUTOR
| null | null | null |
`datasets` currently asks for a max fsspec version of `2025.3.0`. This change updates the pin to the current latest release. It is mainly needed to resolve conflicts with other packages in an environment: in my particular case, `aider-chat`, which is part of my environment, installs `2025.5.1`, which is incompatible with `datasets`.
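As an illustration of what the pin change permits, here is a small sketch (hypothetical helpers, not pip's actual resolver logic) comparing calendar-versioned fsspec releases against the old and new upper bounds:

```python
def parse_version(v: str) -> tuple:
    # Calendar-versioned releases like "2025.5.1" compare correctly as int tuples.
    return tuple(int(part) for part in v.split("."))

def within_max(version: str, max_version: str) -> bool:
    # True if `version` satisfies an upper pin of `<=max_version`.
    return parse_version(version) <= parse_version(max_version)

# Under the old pin (<=2025.3.0), fsspec 2025.5.1 is rejected:
assert not within_max("2025.5.1", "2025.3.0")
# Under the updated pin (<=2025.7.0), it is accepted:
assert within_max("2025.5.1", "2025.7.0")
```

Real resolvers use PEP 440 semantics (pre-releases, epochs, local versions), which this integer-tuple comparison ignores; it is enough to show why the environment conflict goes away.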
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7701/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7701/timeline
| null | null | 0 |
{
"diff_url": "https://github.com/huggingface/datasets/pull/7701.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7701",
"merged_at": "2025-07-28T11:58:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7701.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7701"
}
| 2025-07-26T06:47:59 | 2025-07-28T11:58:11 | 2 days, 5:10:12 | true |
https://api.github.com/repos/huggingface/datasets/issues/7703
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7703/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7703/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7703/events
|
https://github.com/huggingface/datasets/issues/7703
| 3,265,648,942 |
I_kwDODunzps7Cpdku
| 7,703 |
[Docs] map() example uses undefined `tokenizer` — causes NameError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
"events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
"followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
"following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
"gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sanjaykumar030",
"id": 183703408,
"login": "Sanjaykumar030",
"node_id": "U_kgDOCvMXcA",
"organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
"received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
"repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sanjaykumar030",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
{
"author_association": "NONE",
"body": "I've submitted PR #7704 which adds documentation to clarify the behavior of `map()` when returning `None`.",
"created_at": "2025-07-27T05:39:18Z",
"html_url": "https://github.com/huggingface/datasets/issues/7703#issuecomment-3124002704",
"id": 3124002704,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7703",
"node_id": "IC_kwDODunzps66NH-Q",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3124002704/reactions"
},
"updated_at": "2025-07-27T05:39:18Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3124002704",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
"events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
"followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
"following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
"gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sanjaykumar030",
"id": 183703408,
"login": "Sanjaykumar030",
"node_id": "U_kgDOCvMXcA",
"organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
"received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
"repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sanjaykumar030",
"user_view_type": "public"
}
}
] | 2025-07-26T13:35:11 | 2025-07-27T09:44:35 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Description
The current documentation example for `datasets.Dataset.map()` demonstrates batched processing but uses a `tokenizer` object without defining or importing it. This causes a `NameError` whenever the example is copied and run as-is.
Here is the problematic line:
```python
# process a batch of examples
>>> ds = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```
This assumes the user has already set up a tokenizer, which contradicts the goal of having self-contained, copy-paste-friendly examples.
## Problem
Users who copy and run the example as-is will encounter:
```python
NameError: name 'tokenizer' is not defined
```
This breaks the flow for users and violates HuggingFace's documentation principle that examples should "work as expected" when copied directly.
## Proposal
Update the example to include the required tokenizer setup using the Transformers library, like so:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# `ds` is any dataset with a "text" column, e.g. created via load_dataset()
ds_tokenized = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```
This will help new users understand the workflow and apply the method correctly.
## Note
This complements ongoing improvements like #7700, which clarifies multiprocessing in `.map()`. My change focuses on the undefined `tokenizer`, which causes a `NameError`.
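As background on `batched=True`, here is a minimal pure-Python emulation of how `map()` hands batches to the mapped function (a sketch of the semantics only, not datasets' actual implementation; `toy_tokenize` is a hypothetical stand-in for a real tokenizer):

```python
def batched_map(columns, fn, batch_size=2):
    """Emulate Dataset.map(batched=True): fn receives a dict of column slices."""
    n = len(next(iter(columns.values())))
    out = {}
    for i in range(0, n, batch_size):
        batch = {k: v[i:i + batch_size] for k, v in columns.items()}
        result = fn(batch)  # fn returns a dict of new/updated columns
        for key, values in result.items():
            out.setdefault(key, []).extend(values)
    return out

def toy_tokenize(batch):
    # Hypothetical stand-in for a real tokenizer: whitespace split per example.
    return {"input_ids": [text.split() for text in batch["text"]]}

cols = {"text": ["hello world", "foo bar baz", "qux"]}
tokenized = batched_map(cols, toy_tokenize)
# tokenized["input_ids"] == [["hello", "world"], ["foo", "bar", "baz"], ["qux"]]
```

This shows why the mapped function must accept a dict of lists rather than a single example when `batched=True` is set.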
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7703/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7703/timeline
| null | null | null | null | 2025-07-26T13:35:11 | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7707
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7707/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7707/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7707/events
|
https://github.com/huggingface/datasets/issues/7707
| 3,271,867,998 |
I_kwDODunzps7DBL5e
| 7,707 |
load_dataset() in 4.0.0 failed when decoding audio
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jiqing-feng",
"id": 107918818,
"login": "jiqing-feng",
"node_id": "U_kgDOBm614g",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jiqing-feng",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
{
"author_association": "NONE",
"body": "Hi @lhoestq . Would you please have a look at it? I use the official NV Docker ([NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`) on A100 and encountered this issue, but I don't know how to fix it.",
"created_at": "2025-07-29T03:27:18Z",
"html_url": "https://github.com/huggingface/datasets/issues/7707#issuecomment-3130505493",
"id": 3130505493,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7707",
"node_id": "IC_kwDODunzps66l7kV",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3130505493/reactions"
},
"updated_at": "2025-07-29T05:48:21Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3130505493",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jiqing-feng",
"id": 107918818,
"login": "jiqing-feng",
"node_id": "U_kgDOBm614g",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jiqing-feng",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
    "body": "Use `!pip install -U datasets[audio]` rather than `!pip install datasets`\n\nI got the solution from this link [https://github.com/huggingface/datasets/issues/7678](https://github.com/huggingface/datasets/issues/7678), and it processes the data; however, it led to certain transformers import errors",
"created_at": "2025-07-29T13:50:33Z",
"html_url": "https://github.com/huggingface/datasets/issues/7707#issuecomment-3132627446",
"id": 3132627446,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7707",
"node_id": "IC_kwDODunzps66uBn2",
"performed_via_github_app": null,
"reactions": {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3132627446/reactions"
},
"updated_at": "2025-07-30T15:11:23Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3132627446",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/107439558?v=4",
"events_url": "https://api.github.com/users/asantewaa-bremang/events{/privacy}",
"followers_url": "https://api.github.com/users/asantewaa-bremang/followers",
"following_url": "https://api.github.com/users/asantewaa-bremang/following{/other_user}",
"gists_url": "https://api.github.com/users/asantewaa-bremang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/asantewaa-bremang",
"id": 107439558,
"login": "asantewaa-bremang",
"node_id": "U_kgDOBmdlxg",
"organizations_url": "https://api.github.com/users/asantewaa-bremang/orgs",
"received_events_url": "https://api.github.com/users/asantewaa-bremang/received_events",
"repos_url": "https://api.github.com/users/asantewaa-bremang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/asantewaa-bremang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asantewaa-bremang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/asantewaa-bremang",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "> https://github.com/huggingface/datasets/issues/7678\n\nHi @asantewaa-bremang . Thanks for your reply, but sadly it does not work for me.",
"created_at": "2025-07-30T01:06:25Z",
"html_url": "https://github.com/huggingface/datasets/issues/7707#issuecomment-3134550394",
"id": 3134550394,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7707",
"node_id": "IC_kwDODunzps661XF6",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3134550394/reactions"
},
"updated_at": "2025-07-30T01:06:25Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3134550394",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jiqing-feng",
"id": 107918818,
"login": "jiqing-feng",
"node_id": "U_kgDOBm614g",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jiqing-feng",
"user_view_type": "public"
}
},
{
"author_association": "MEMBER",
"body": "It looks like a torchcodec issue, have you tried to look at the torchcodec issues here in case someone has the same issue ? https://github.com/pytorch/torchcodec/issues\n\notherwise feel free to open a new issue there",
"created_at": "2025-07-30T15:13:48Z",
"html_url": "https://github.com/huggingface/datasets/issues/7707#issuecomment-3136790067",
"id": 3136790067,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7707",
"node_id": "IC_kwDODunzps66954z",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3136790067/reactions"
},
"updated_at": "2025-07-30T15:13:48Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3136790067",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "@jiqing-feng, are you running the code on Colab? If you are, you should restart after making this installation ! pip install -U datasets[audio]. ",
"created_at": "2025-07-30T16:38:50Z",
"html_url": "https://github.com/huggingface/datasets/issues/7707#issuecomment-3137087580",
"id": 3137087580,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7707",
"node_id": "IC_kwDODunzps66_Chc",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3137087580/reactions"
},
"updated_at": "2025-07-30T16:38:50Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3137087580",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/107439558?v=4",
"events_url": "https://api.github.com/users/asantewaa-bremang/events{/privacy}",
"followers_url": "https://api.github.com/users/asantewaa-bremang/followers",
"following_url": "https://api.github.com/users/asantewaa-bremang/following{/other_user}",
"gists_url": "https://api.github.com/users/asantewaa-bremang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/asantewaa-bremang",
"id": 107439558,
"login": "asantewaa-bremang",
"node_id": "U_kgDOBmdlxg",
"organizations_url": "https://api.github.com/users/asantewaa-bremang/orgs",
"received_events_url": "https://api.github.com/users/asantewaa-bremang/received_events",
"repos_url": "https://api.github.com/users/asantewaa-bremang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/asantewaa-bremang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asantewaa-bremang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/asantewaa-bremang",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "> [@jiqing-feng](https://github.com/jiqing-feng), are you running the code on Colab? If you are, you should restart after making this installation ! pip install -U datasets[audio].\n\nNo, I ran the script on the A100 instance locally.",
"created_at": "2025-07-31T01:02:58Z",
"html_url": "https://github.com/huggingface/datasets/issues/7707#issuecomment-3138255744",
"id": 3138255744,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7707",
"node_id": "IC_kwDODunzps67DfuA",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3138255744/reactions"
},
"updated_at": "2025-07-31T01:02:58Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3138255744",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jiqing-feng",
"id": 107918818,
"login": "jiqing-feng",
"node_id": "U_kgDOBm614g",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jiqing-feng",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "> It looks like a torchcodec issue, have you tried to look at the torchcodec issues here in case someone has the same issue ? https://github.com/pytorch/torchcodec/issues\n> \n> otherwise feel free to open a new issue there\n\nThanks! I've opened a new issue on torchcodec. Could we have a fallback implementation without torchcodec (just like datasets==3.6.0) ?",
"created_at": "2025-07-31T03:00:54Z",
"html_url": "https://github.com/huggingface/datasets/issues/7707#issuecomment-3138406828",
"id": 3138406828,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7707",
"node_id": "IC_kwDODunzps67EEms",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3138406828/reactions"
},
"updated_at": "2025-07-31T03:01:09Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3138406828",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jiqing-feng",
"id": 107918818,
"login": "jiqing-feng",
"node_id": "U_kgDOBm614g",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jiqing-feng",
"user_view_type": "public"
}
},
{
"author_association": "MEMBER",
"body": "> Thanks! I've opened a new issue on torchcodec. Could we have a fallback implementation without torchcodec (just like datasets==3.6.0) ?\n\nFor now I'd recommend using `datasets==3.6.0` if this issue is blocking for you",
"created_at": "2025-07-31T16:17:43Z",
"html_url": "https://github.com/huggingface/datasets/issues/7707#issuecomment-3140560731",
"id": 3140560731,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7707",
"node_id": "IC_kwDODunzps67MSdb",
"performed_via_github_app": null,
"reactions": {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3140560731/reactions"
},
"updated_at": "2025-07-31T16:17:43Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3140560731",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "Resolved by installing the pre-release torchcodec. Thanks!",
"created_at": "2025-08-01T05:15:45Z",
"html_url": "https://github.com/huggingface/datasets/issues/7707#issuecomment-3142202121",
"id": 3142202121,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7707",
"node_id": "IC_kwDODunzps67SjMJ",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3142202121/reactions"
},
"updated_at": "2025-08-01T05:15:45Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3142202121",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jiqing-feng",
"id": 107918818,
"login": "jiqing-feng",
"node_id": "U_kgDOBm614g",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jiqing-feng",
"user_view_type": "public"
}
}
] | 2025-07-29T03:25:03 | 2025-07-31T03:01:09 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Cannot decode audio data.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
print(dataset[0]["audio"]["array"])
```
1st round run, got
```
File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 172, in decode_example
raise ImportError("To support decoding audio data, please install 'torchcodec'.")
ImportError: To support decoding audio data, please install 'torchcodec'.
```
After `pip install torchcodec` and run, got
```
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/_metadata.py", line 16, in <module>
from torchcodec._core.ops import (
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 84, in <module>
load_torchcodec_shared_libraries()
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 69, in load_torchcodec_shared_libraries
raise RuntimeError(
RuntimeError: Could not load libtorchcodec. Likely causes:
1. FFmpeg is not properly installed in your environment. We support
versions 4, 5, 6 and 7.
2. The PyTorch version (2.8.0a0+5228986c39.nv25.06) is not compatible with
this version of TorchCodec. Refer to the version compatibility
table:
https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.
3. Another runtime dependency; see exceptions below.
The following exceptions were raised as we tried to load libtorchcodec:
[start of libtorchcodec loading traceback]
FFmpeg version 7: libavutil.so.59: cannot open shared object file: No such file or directory
FFmpeg version 6: libavutil.so.58: cannot open shared object file: No such file or directory
FFmpeg version 5: libavutil.so.57: cannot open shared object file: No such file or directory
FFmpeg version 4: libavutil.so.56: cannot open shared object file: No such file or directory
[end of libtorchcodec loading traceback].
```
After `apt update && apt install ffmpeg -y`, got
```
Traceback (most recent call last):
File "/workspace/jiqing/test_datasets.py", line 4, in <module>
print(dataset[0]["audio"]["array"])
~~~~~~~^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2841, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 657, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 410, in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 459, in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 223, in decode_row
return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 2093, in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 1405, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 198, in decode_example
audio = AudioDecoder(bytes, stream_index=self.stream_index, sample_rate=self.sampling_rate)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_audio_decoder.py", line 62, in __init__
self._decoder = create_decoder(source=source, seek_mode="approximate")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_decoder_utils.py", line 33, in create_decoder
return core.create_from_bytes(source, seek_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 144, in create_from_bytes
return create_from_tensor(buffer, seek_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_ops.py", line 756, in __call__
return self._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Could not run 'torchcodec_ns::create_from_tensor' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchcodec_ns::create_from_tensor' is only available for these backends: [Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMTIA, AutogradMAIA, AutogradMeta, Tracer, AutocastCPU, AutocastMTIA, AutocastMAIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
Meta: registered at /dev/null:214 [kernel]
BackendSelect: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /__w/torchcodec/torchcodec/pytorch/torchcodec/src/torchcodec/_core/custom_ops.cpp:694 [kernel]
FuncTorchDynamicLayerBackMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:479 [backend fallback]
Functionalize: registered at /opt/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:349 [backend fallback]
Named: registered at /opt/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /opt/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /opt/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:104 [backend fallback]
AutogradOther: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradCPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:67 [backend fallback]
AutogradCUDA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:75 [backend fallback]
AutogradXLA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:87 [backend fallback]
AutogradMPS: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:95 [backend fallback]
AutogradXPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:71 [backend fallback]
AutogradHPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:108 [backend fallback]
AutogradLazy: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:91 [backend fallback]
AutogradMTIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:79 [backend fallback]
AutogradMAIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:83 [backend fallback]
AutogradMeta: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:99 [backend fallback]
Tracer: registered at /opt/pytorch/pytorch/torch/csrc/autograd/TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:322 [backend fallback]
AutocastMTIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
AutocastMAIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:504 [backend fallback]
AutocastXPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:542 [backend fallback]
AutocastMPS: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:209 [backend fallback]
AutocastCUDA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:165 [backend fallback]
FuncTorchBatched: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:731 [backend fallback]
BatchedNestedTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:758 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:27 [backend fallback]
Batched: registered at /opt/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:208 [backend fallback]
PythonTLSSnapshot: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:202 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:475 [backend fallback]
PreDispatch: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:206 [backend fallback]
PythonDispatcher: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:198 [backend fallback]
```
### Expected behavior
The result is
```
[0.00238037 0.0020752 0.00198364 ... 0.00042725 0.00057983 0.0010376 ]
```
on `datasets==3.6.0`
### Environment info
[NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`
```
- `datasets` version: 4.0.0
- Platform: Linux-5.4.292-1.el8.elrepo.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
```
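For anyone blocked on this until the torchcodec/FFmpeg situation is sorted out, here is a minimal fallback sketch that decodes 16-bit PCM WAV bytes with only the Python standard library, sidestepping torchcodec entirely. `decode_wav_bytes` is an illustrative helper, not part of the `datasets` API, and it assumes mono 16-bit WAV input; real datasets often store FLAC/MP3, which this does not handle.

```python
import io
import struct
import wave

def decode_wav_bytes(wav_bytes: bytes):
    """Decode 16-bit PCM WAV bytes to (samples as floats in [-1, 1], sample_rate).

    Mono is assumed for simplicity; channels are not de-interleaved.
    """
    with wave.open(io.BytesIO(wav_bytes), "rb") as wf:
        rate = wf.getframerate()
        width = wf.getsampwidth()
        raw = wf.readframes(wf.getnframes())
    if width != 2:
        raise NotImplementedError("only 16-bit PCM is handled in this sketch")
    ints = struct.unpack("<%dh" % (len(raw) // 2), raw)
    samples = [s / 32768.0 for s in ints]
    return samples, rate
```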
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7707/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7707/timeline
| null | null | null | null | 2025-07-29T03:25:03 | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7702
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7702/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7702/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7702/events
|
https://github.com/huggingface/datasets/pull/7702
| 3,265,328,549 |
PR_kwDODunzps6gvdYC
| 7,702 |
num_proc=0 behave like None and clarify num_proc documentation in .map()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/84439872?v=4",
"events_url": "https://api.github.com/users/tanuj-rai/events{/privacy}",
"followers_url": "https://api.github.com/users/tanuj-rai/followers",
"following_url": "https://api.github.com/users/tanuj-rai/following{/other_user}",
"gists_url": "https://api.github.com/users/tanuj-rai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tanuj-rai",
"id": 84439872,
"login": "tanuj-rai",
"node_id": "MDQ6VXNlcjg0NDM5ODcy",
"organizations_url": "https://api.github.com/users/tanuj-rai/orgs",
"received_events_url": "https://api.github.com/users/tanuj-rai/received_events",
"repos_url": "https://api.github.com/users/tanuj-rai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tanuj-rai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanuj-rai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tanuj-rai",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
{
"author_association": "MEMBER",
"body": "I think we can support num_proc=0 and make it equivalent to `None` to make it simpler",
"created_at": "2025-07-30T15:46:03Z",
"html_url": "https://github.com/huggingface/datasets/pull/7702#issuecomment-3136895555",
"id": 3136895555,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7702",
"node_id": "IC_kwDODunzps66-TpD",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3136895555/reactions"
},
"updated_at": "2025-07-30T15:46:03Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3136895555",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7702). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"created_at": "2025-07-30T15:47:46Z",
"html_url": "https://github.com/huggingface/datasets/pull/7702#issuecomment-3136901002",
"id": 3136901002,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7702",
"node_id": "IC_kwDODunzps66-U-K",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3136901002/reactions"
},
"updated_at": "2025-07-30T15:47:46Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3136901002",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/99929124?v=4",
"events_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/events{/privacy}",
"followers_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/followers",
"following_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/following{/other_user}",
"gists_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HuggingFaceDocBuilderDev",
"id": 99929124,
"login": "HuggingFaceDocBuilderDev",
"node_id": "U_kgDOBfTMJA",
"organizations_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/orgs",
"received_events_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/received_events",
"repos_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HuggingFaceDocBuilderDev",
"user_view_type": "public"
}
},
{
"author_association": "CONTRIBUTOR",
"body": "> I think we can support num_proc=0 and make it equivalent to `None` to make it simpler\r\n\r\nThank you @lhoestq for reviewing it. Please let me know if anything needs to be updated further.",
"created_at": "2025-07-31T05:27:19Z",
"html_url": "https://github.com/huggingface/datasets/pull/7702#issuecomment-3138604518",
"id": 3138604518,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7702",
"node_id": "IC_kwDODunzps67E03m",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3138604518/reactions"
},
"updated_at": "2025-07-31T05:27:19Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3138604518",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/84439872?v=4",
"events_url": "https://api.github.com/users/tanuj-rai/events{/privacy}",
"followers_url": "https://api.github.com/users/tanuj-rai/followers",
"following_url": "https://api.github.com/users/tanuj-rai/following{/other_user}",
"gists_url": "https://api.github.com/users/tanuj-rai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tanuj-rai",
"id": 84439872,
"login": "tanuj-rai",
"node_id": "MDQ6VXNlcjg0NDM5ODcy",
"organizations_url": "https://api.github.com/users/tanuj-rai/orgs",
"received_events_url": "https://api.github.com/users/tanuj-rai/received_events",
"repos_url": "https://api.github.com/users/tanuj-rai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tanuj-rai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanuj-rai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tanuj-rai",
"user_view_type": "public"
}
}
] | 2025-07-26T08:19:39 | 2025-07-31T05:22:35 | null |
NONE
| null | null | null |
Fixes issue #7700
This PR makes num_proc=0 behave like None in Dataset.map(), disabling multiprocessing.
It improves UX by aligning with DataLoader(num_workers=0) behavior.
The num_proc docstring is also updated to clearly explain valid values and behavior.
@SunMarc
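A rough sketch of the behavior this PR describes, treating `num_proc=0` the same as `None` (the helper name and placement are illustrative, not the actual patch):

```python
def resolve_num_proc(num_proc):
    """Normalize num_proc: 0 and None both mean 'no multiprocessing',
    mirroring DataLoader(num_workers=0) semantics."""
    if num_proc is None or num_proc == 0:
        return None  # run in the main process
    if num_proc < 0:
        raise ValueError(f"num_proc must be >= 0, got {num_proc}")
    return num_proc
```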
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7702/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7702/timeline
| null | null | 0 |
{
"diff_url": "https://github.com/huggingface/datasets/pull/7702.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7702",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7702.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7702"
}
| 2025-07-26T08:19:39 | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7697
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7697/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7697/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7697/events
|
https://github.com/huggingface/datasets/issues/7697
| 3,254,526,399 |
I_kwDODunzps7B_CG_
| 7,697 |
-
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/44517413?v=4",
"events_url": "https://api.github.com/users/kakamond/events{/privacy}",
"followers_url": "https://api.github.com/users/kakamond/followers",
"following_url": "https://api.github.com/users/kakamond/following{/other_user}",
"gists_url": "https://api.github.com/users/kakamond/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kakamond",
"id": 44517413,
"login": "kakamond",
"node_id": "MDQ6VXNlcjQ0NTE3NDEz",
"organizations_url": "https://api.github.com/users/kakamond/orgs",
"received_events_url": "https://api.github.com/users/kakamond/received_events",
"repos_url": "https://api.github.com/users/kakamond/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kakamond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kakamond/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kakamond",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[] | 2025-07-23T01:30:32 | 2025-07-25T15:21:39 | 2025-07-25T15:21:39 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
-
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7697/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7697/timeline
| null |
completed
| null | null | 2025-07-23T01:30:32 | 2025-07-25T15:21:39 | 2 days, 13:51:07 | false |
https://api.github.com/repos/huggingface/datasets/issues/7698
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7698/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7698/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7698/events
|
https://github.com/huggingface/datasets/issues/7698
| 3,255,350,916 |
I_kwDODunzps7CCLaE
| 7,698 |
NotImplementedError when using streaming=True in Google Colab environment
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/100470741?v=4",
"events_url": "https://api.github.com/users/Aniket17200/events{/privacy}",
"followers_url": "https://api.github.com/users/Aniket17200/followers",
"following_url": "https://api.github.com/users/Aniket17200/following{/other_user}",
"gists_url": "https://api.github.com/users/Aniket17200/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Aniket17200",
"id": 100470741,
"login": "Aniket17200",
"node_id": "U_kgDOBf0P1Q",
"organizations_url": "https://api.github.com/users/Aniket17200/orgs",
"received_events_url": "https://api.github.com/users/Aniket17200/received_events",
"repos_url": "https://api.github.com/users/Aniket17200/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Aniket17200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aniket17200/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Aniket17200",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
{
"author_association": "CONTRIBUTOR",
"body": "Hi, @Aniket17200, try upgrading datasets using '!pip install -U datasets'. I hope this will resolve your issue.",
"created_at": "2025-07-23T13:46:33Z",
"html_url": "https://github.com/huggingface/datasets/issues/7698#issuecomment-3108643682",
"id": 3108643682,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7698",
"node_id": "IC_kwDODunzps65SiNi",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3108643682/reactions"
},
"updated_at": "2025-07-23T13:46:33Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3108643682",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/84439872?v=4",
"events_url": "https://api.github.com/users/tanuj-rai/events{/privacy}",
"followers_url": "https://api.github.com/users/tanuj-rai/followers",
"following_url": "https://api.github.com/users/tanuj-rai/following{/other_user}",
"gists_url": "https://api.github.com/users/tanuj-rai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tanuj-rai",
"id": 84439872,
"login": "tanuj-rai",
"node_id": "MDQ6VXNlcjg0NDM5ODcy",
"organizations_url": "https://api.github.com/users/tanuj-rai/orgs",
"received_events_url": "https://api.github.com/users/tanuj-rai/received_events",
"repos_url": "https://api.github.com/users/tanuj-rai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tanuj-rai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanuj-rai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tanuj-rai",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "Thank you @tanuj-rai, it's working great ",
"created_at": "2025-07-23T15:06:23Z",
"html_url": "https://github.com/huggingface/datasets/issues/7698#issuecomment-3109038307",
"id": 3109038307,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7698",
"node_id": "IC_kwDODunzps65UCjj",
"performed_via_github_app": null,
"reactions": {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3109038307/reactions"
},
"updated_at": "2025-07-23T15:06:23Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3109038307",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/100470741?v=4",
"events_url": "https://api.github.com/users/Aniket17200/events{/privacy}",
"followers_url": "https://api.github.com/users/Aniket17200/followers",
"following_url": "https://api.github.com/users/Aniket17200/following{/other_user}",
"gists_url": "https://api.github.com/users/Aniket17200/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Aniket17200",
"id": 100470741,
"login": "Aniket17200",
"node_id": "U_kgDOBf0P1Q",
"organizations_url": "https://api.github.com/users/Aniket17200/orgs",
"received_events_url": "https://api.github.com/users/Aniket17200/received_events",
"repos_url": "https://api.github.com/users/Aniket17200/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Aniket17200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aniket17200/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Aniket17200",
"user_view_type": "public"
}
}
] | 2025-07-23T08:04:53 | 2025-07-23T15:06:23 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When attempting to load a large dataset (like tiiuae/falcon-refinedweb or allenai/c4) using streaming=True in a standard Google Colab notebook, the process fails with a NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet. This issue persists even after upgrading datasets and huggingface_hub and restarting the session.
### Steps to reproduce the bug
Open a new Google Colab notebook.
(Optional but recommended) Run !pip install --upgrade datasets huggingface_hub and restart the runtime.
Run the following code:
```python
from datasets import load_dataset

try:
    print("Attempting to load a stream...")
    streaming_dataset = load_dataset('tiiuae/falcon-refinedweb', streaming=True)
    print("Success!")
except Exception as e:
    print(e)
```
### Expected behavior
The load_dataset command should return a StreamingDataset object without raising an error, allowing iteration over the dataset.
Actual Behavior
The code fails and prints the following error traceback:
[PASTE THE FULL ERROR TRACEBACK HERE]
(Note: Copy the entire error message you received, from Traceback... to the final error line, and paste it in this section.)
### Environment info
Platform: Google Colab
datasets version: [Run !pip show datasets in Colab and paste the version here]
huggingface_hub version: [Run !pip show huggingface_hub and paste the version here]
Python version: [Run !python --version and paste the version here]
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7698/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7698/timeline
| null | null | null | null | 2025-07-23T08:04:53 | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7691
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7691/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7691/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7691/events
|
https://github.com/huggingface/datasets/issues/7691
| 3,245,547,170 |
I_kwDODunzps7Bcx6i
| 7,691 |
Large WebDataset: pyarrow.lib.ArrowCapacityError on load() even with streaming
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cleong110",
"id": 122366389,
"login": "cleong110",
"node_id": "U_kgDOB0sptQ",
"organizations_url": "https://api.github.com/users/cleong110/orgs",
"received_events_url": "https://api.github.com/users/cleong110/received_events",
"repos_url": "https://api.github.com/users/cleong110/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cleong110",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
{
"author_association": "NONE",
"body": "It seems the error occurs right here, as it tries to infer the Features: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L90",
"created_at": "2025-07-19T18:44:34Z",
"html_url": "https://github.com/huggingface/datasets/issues/7691#issuecomment-3092508759",
"id": 3092508759,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7691",
"node_id": "IC_kwDODunzps64U_BX",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3092508759/reactions"
},
"updated_at": "2025-07-19T18:44:34Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3092508759",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cleong110",
"id": 122366389,
"login": "cleong110",
"node_id": "U_kgDOB0sptQ",
"organizations_url": "https://api.github.com/users/cleong110/orgs",
"received_events_url": "https://api.github.com/users/cleong110/received_events",
"repos_url": "https://api.github.com/users/cleong110/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cleong110",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "It seems to me that if we have something that is so large that it cannot fit in pa.table, the fallback method should be to just set it as \"binary\" type, perhaps?",
"created_at": "2025-07-19T20:19:14Z",
"html_url": "https://github.com/huggingface/datasets/issues/7691#issuecomment-3092552369",
"id": 3092552369,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7691",
"node_id": "IC_kwDODunzps64VJqx",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3092552369/reactions"
},
"updated_at": "2025-07-19T20:19:14Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3092552369",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cleong110",
"id": 122366389,
"login": "cleong110",
"node_id": "U_kgDOB0sptQ",
"organizations_url": "https://api.github.com/users/cleong110/orgs",
"received_events_url": "https://api.github.com/users/cleong110/received_events",
"repos_url": "https://api.github.com/users/cleong110/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cleong110",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "I also tried creating a dataset_info.json but the webdataset builder didn't seem to look for it and load it",
"created_at": "2025-07-19T21:03:08Z",
"html_url": "https://github.com/huggingface/datasets/issues/7691#issuecomment-3092570641",
"id": 3092570641,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7691",
"node_id": "IC_kwDODunzps64VOIR",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3092570641/reactions"
},
"updated_at": "2025-07-19T21:03:08Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3092570641",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cleong110",
"id": 122366389,
"login": "cleong110",
"node_id": "U_kgDOB0sptQ",
"organizations_url": "https://api.github.com/users/cleong110/orgs",
"received_events_url": "https://api.github.com/users/cleong110/received_events",
"repos_url": "https://api.github.com/users/cleong110/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cleong110",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "Workaround on my end, removed all videos larger than 2GB for now. The dataset no longer crashes.",
"created_at": "2025-07-21T19:17:33Z",
"html_url": "https://github.com/huggingface/datasets/issues/7691#issuecomment-3098061246",
"id": 3098061246,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7691",
"node_id": "IC_kwDODunzps64qKm-",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3098061246/reactions"
},
"updated_at": "2025-07-21T19:17:33Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3098061246",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cleong110",
"id": 122366389,
"login": "cleong110",
"node_id": "U_kgDOB0sptQ",
"organizations_url": "https://api.github.com/users/cleong110/orgs",
"received_events_url": "https://api.github.com/users/cleong110/received_events",
"repos_url": "https://api.github.com/users/cleong110/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cleong110",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "Potential patch to webdataset.py could be like so: \n```python\nLARGE_THRESHOLD = 2 * 1024 * 1024 * 1024 # 2 GB\nlarge_fields = set()\n\n# Replace large binary fields with None for schema inference\nprocessed_examples = []\nfor example in first_examples:\n new_example = {}\n for k, v in example.items():\n if isinstance(v, bytes) and len(v) > LARGE_THRESHOLD:\n large_fields.add(k)\n new_example[k] = None # Replace with None to avoid Arrow errors\n else:\n new_example[k] = v\n processed_examples.append(new_example)\n\n# Proceed to infer schema\npa_tables = [\n pa.Table.from_pylist(cast_to_python_objects([example], only_1d_for_numpy=True))\n for example in processed_examples\n]\ninferred_arrow_schema = pa.concat_tables(pa_tables, promote_options=\"default\").schema\n\n# Patch features to reflect large_binary\nfeatures = datasets.Features.from_arrow_schema(inferred_arrow_schema)\nfor field in large_fields:\n features[field] = datasets.Value(\"large_binary\")\n\n```",
"created_at": "2025-07-25T08:51:10Z",
"html_url": "https://github.com/huggingface/datasets/issues/7691#issuecomment-3116952116",
"id": 3116952116,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7691",
"node_id": "IC_kwDODunzps65yOo0",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3116952116/reactions"
},
"updated_at": "2025-07-25T08:51:10Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3116952116",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cleong110",
"id": 122366389,
"login": "cleong110",
"node_id": "U_kgDOB0sptQ",
"organizations_url": "https://api.github.com/users/cleong110/orgs",
"received_events_url": "https://api.github.com/users/cleong110/received_events",
"repos_url": "https://api.github.com/users/cleong110/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cleong110",
"user_view_type": "public"
}
}
] | 2025-07-19T18:40:27 | 2025-07-25T08:51:10 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I am creating a large WebDataset-format dataset for sign language processing research, and a number of the videos are over 2GB. The instant I hit a shard containing one of those videos, I get an ArrowCapacityError, even with streaming.
I made a config for the dataset that includes just one problem shard, and the error triggers as soon as load_dataset() runs, even with streaming=True:
```python
ds = load_dataset("bible-nlp/sign-bibles", "ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard", streaming=True, split="train")
```
This gives:
```
File "/opt/home/cleong/projects/semantic_and_visual_similarity/sign-bibles-dataset/sign_bibles_dataset/tasks/test_iteration.py", line 13, in iterate_keys
ds = load_dataset("bible-nlp/sign-bibles", language_subset, streaming=True, split="train")
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/load.py", line 1409, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/builder.py", line 1225, in as_streaming_dataset
splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 88, in _split_generators
pa.Table.from_pylist(cast_to_python_objects([example], only_1d_for_numpy=True))
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 2046, in pyarrow.lib._Tabular.from_pylist
File "pyarrow/table.pxi", line 6431, in pyarrow.lib._from_pylist
File "pyarrow/table.pxi", line 4893, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1607, in pyarrow.lib._sanitize_arrays
File "pyarrow/table.pxi", line 1588, in pyarrow.lib._schema_from_arrays
File "pyarrow/array.pxi", line 375, in pyarrow.lib.array
File "pyarrow/array.pxi", line 45, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 3980158992
```
### Steps to reproduce the bug
```python
#!/usr/bin/env python
import argparse
from datasets import get_dataset_config_names, load_dataset
from tqdm import tqdm
from pyarrow.lib import ArrowCapacityError, ArrowInvalid
def iterate_keys(language_subset: str) -> None:
"""Iterate over all samples in the Sign Bibles dataset and print idx and sample key."""
# https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/loading_methods#datasets.load_dataset
ds = load_dataset("bible-nlp/sign-bibles", language_subset, streaming=True, split="train")
print(f"\n==> Loaded dataset config '{language_subset}'")
idx = 0
estimated_shard_index = 0
samples_per_shard = 5
with tqdm(desc=f"{language_subset} samples") as pbar:
iterator = iter(ds)
while True:
try:
if idx % samples_per_shard == 0 and idx > 0: # 5 samples per shard: 0, 1, 2, 3, 4
print(f"Estimated Shard idx (starting at 0, {samples_per_shard}/shard): {estimated_shard_index}")
estimated_shard_index += 1
sample = next(iterator)
sample_key = sample.get("__key__", "missing-key")
print(f"[{language_subset}] idx={idx}, key={sample_key}")
idx += 1
pbar.update(1)
except StopIteration:
print(f"Finished iterating through {idx} samples of {language_subset}")
break
except (ArrowCapacityError, ArrowInvalid) as e:
print(f"PyArrow error on idx={idx}, config={language_subset}: {e}")
idx += 1
pbar.update(1)
continue
except KeyError as e:
print(f"Missing key error on idx={idx}, config={language_subset}: {e}")
idx += 1
pbar.update(1)
continue
def main():
configs = get_dataset_config_names("bible-nlp/sign-bibles")
print(f"Available configs: {configs}")
configs = [
"ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard"
]
for language_subset in configs:
print(f"TESTING CONFIG {language_subset}")
iterate_keys(language_subset)
# try:
# except (ArrowCapacityError, ArrowInvalid) as e:
# print(f"PyArrow error at config level for {language_subset}: {e}")
# continue
# except RuntimeError as e:
# print(f"RuntimeError at config level for {language_subset}: {e}")
# continue
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Iterate through Sign Bibles dataset and print sample keys.")
args = parser.parse_args()
main()
```
### Expected behavior
I expect that with streaming=True no data is actually loaded until it is requested.
https://huggingface.co/docs/datasets/main/en/package_reference/loading_methods#datasets.load_dataset says that in the streaming case:
>In the streaming case:
> Don’t download or cache anything. Instead, the dataset is lazily loaded and will be streamed on-the-fly when iterating on it.
I did expect some trouble with large files, but assumed streaming mode would not actually try to load them unless a sample field is accessed, e.g. with sample["mp4"]
### Environment info
Local setup: Conda environment on Ubuntu, pip list includes the following
datasets 4.0.0
pyarrow 20.0.0
Verified on Colab: https://colab.research.google.com/drive/1HdN8stlROWrLSYXUoNeV0vQ9pClhIVM8?usp=sharing, though there it crashes by using up all available RAM
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7691/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7691/timeline
| null | null | null | null | 2025-07-19T18:40:27 | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7699
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7699/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7699/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7699/events
|
https://github.com/huggingface/datasets/issues/7699
| 3,261,053,171 |
I_kwDODunzps7CX7jz
| 7,699 |
Broken link in documentation for "Create a video dataset"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
"events_url": "https://api.github.com/users/cleong110/events{/privacy}",
"followers_url": "https://api.github.com/users/cleong110/followers",
"following_url": "https://api.github.com/users/cleong110/following{/other_user}",
"gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cleong110",
"id": 122366389,
"login": "cleong110",
"node_id": "U_kgDOB0sptQ",
"organizations_url": "https://api.github.com/users/cleong110/orgs",
"received_events_url": "https://api.github.com/users/cleong110/received_events",
"repos_url": "https://api.github.com/users/cleong110/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cleong110",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
{
"author_association": "MEMBER",
"body": "The URL is ok but it seems the webdataset website is down. There seems to be a related issue here: https://github.com/webdataset/webdataset/issues/155\n\nFeel free to ask the authors there for an update. Otherwise happy to witch the link to the mirror shared in that issue",
"created_at": "2025-07-25T15:27:47Z",
"html_url": "https://github.com/huggingface/datasets/issues/7699#issuecomment-3118631690",
"id": 3118631690,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7699",
"node_id": "IC_kwDODunzps654osK",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3118631690/reactions"
},
"updated_at": "2025-07-25T15:27:47Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3118631690",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
}
] | 2025-07-24T19:46:28 | 2025-07-25T15:27:47 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken.
https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset
<img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" />
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7699/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7699/timeline
| null | null | null | null | 2025-07-24T19:46:28 | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7692
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7692/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7692/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7692/events
|
https://github.com/huggingface/datasets/issues/7692
| 3,246,268,635 |
I_kwDODunzps7BfiDb
| 7,692 |
xopen: invalid start byte for streaming dataset with trust_remote_code=True
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5188731?v=4",
"events_url": "https://api.github.com/users/sedol1339/events{/privacy}",
"followers_url": "https://api.github.com/users/sedol1339/followers",
"following_url": "https://api.github.com/users/sedol1339/following{/other_user}",
"gists_url": "https://api.github.com/users/sedol1339/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sedol1339",
"id": 5188731,
"login": "sedol1339",
"node_id": "MDQ6VXNlcjUxODg3MzE=",
"organizations_url": "https://api.github.com/users/sedol1339/orgs",
"received_events_url": "https://api.github.com/users/sedol1339/received_events",
"repos_url": "https://api.github.com/users/sedol1339/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sedol1339/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sedol1339/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sedol1339",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
{
"author_association": "MEMBER",
"body": "Hi ! it would be cool to convert this dataset to Parquet. This will make it work for `datasets>=4.0`, enable the Dataset Viewer and make it more reliable to load/stream (currently it uses a loading script in python and those are known for having issues sometimes)\n\nusing `datasets==3.6.0`, here is the command to convert it and open a Pull Request:\n\n```\ndatasets-cli convert_to_parquet espnet/yodas2 --trust_remote_code\n```\n\nThough it's likely that the `UnicodeDecodeError` comes from the loading script. If the script has a bug, it must be fixed to be able to convert the dataset without errors",
"created_at": "2025-07-25T14:38:54Z",
"html_url": "https://github.com/huggingface/datasets/issues/7692#issuecomment-3118172600",
"id": 3118172600,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7692",
"node_id": "IC_kwDODunzps6524m4",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3118172600/reactions"
},
"updated_at": "2025-07-25T14:38:54Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3118172600",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
}
] | 2025-07-20T11:08:20 | 2025-07-25T14:38:54 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I am trying to load the YODAS2 dataset with datasets==3.6.0:
```python
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True, trust_remote_code=True)))
```
This raises `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte`.
The cause of the error is the following:
```python
from datasets.utils.file_utils import xopen
filepath = 'https://huggingface.co/datasets/espnet/yodas2/resolve/c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json'
xopen(filepath, 'r').read()
>>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte
```
And the cause of this is the following:
```python
import fsspec
fsspec.open(
'hf://datasets/espnet/yodas2@c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json',
mode='r',
hf={'token': None, 'endpoint': 'https://huggingface.co'},
).open().read()
>>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte
```
Is it true that streaming=True loading is no longer supported with trust_remote_code=True, even on datasets==3.6.0? This breaks backward compatibility.
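One way to narrow down whether those bytes are really UTF-8 text or something compressed is to read the raw bytes and check the leading magic numbers. A hedged diagnostic sketch (the `sniff` helper is hypothetical, shown here on a locally downloaded copy of the file; gzip starts with `1f 8b`, zstd with `28 b5 2f fd`):

```python
# Hypothetical diagnostic for the UnicodeDecodeError above: read the
# raw bytes (no decoding) and check for common compression magic
# numbers before assuming the file is plain UTF-8 JSON.
def sniff(path: str) -> str:
    with open(path, "rb") as f:  # binary mode: no utf-8 decoding
        head = f.read(4)
    if head[:2] == b"\x1f\x8b":
        return "gzip"
    if head[:4] == b"\x28\xb5\x2f\xfd":
        return "zstd"
    return "plain or unknown (first bytes: %s)" % head.hex()
```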
### Steps to reproduce the bug
```python
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True)))
```
### Expected behavior
No errors expected
### Environment info
datasets==3.6.0, ubuntu 24.04
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7692/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7692/timeline
| null | null | null | null | 2025-07-20T11:08:20 | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7694
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7694/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7694/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7694/events
|
https://github.com/huggingface/datasets/issues/7694
| 3,247,600,408 |
I_kwDODunzps7BknMY
| 7,694 |
Dataset.to_json consumes excessive memory, appears to not be a streaming operation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/49603999?v=4",
"events_url": "https://api.github.com/users/ycq0125/events{/privacy}",
"followers_url": "https://api.github.com/users/ycq0125/followers",
"following_url": "https://api.github.com/users/ycq0125/following{/other_user}",
"gists_url": "https://api.github.com/users/ycq0125/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ycq0125",
"id": 49603999,
"login": "ycq0125",
"node_id": "MDQ6VXNlcjQ5NjAzOTk5",
"organizations_url": "https://api.github.com/users/ycq0125/orgs",
"received_events_url": "https://api.github.com/users/ycq0125/received_events",
"repos_url": "https://api.github.com/users/ycq0125/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ycq0125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ycq0125/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ycq0125",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
{
"author_association": "MEMBER",
"body": "Hi ! to_json is memory efficient and writes the data by batch:\n\nhttps://github.com/huggingface/datasets/blob/d9861d86be222884dabbd534a2db770c70c9b558/src/datasets/io/json.py#L153-L159\n\nWhat memory are you mesuring ? If you are mesuring RSS, it is likely that it counts the memory mapped data of the dataset. Memory mapped data are loaded as physical memory when accessed and are automatically discarded when your OS needs more memory, and therefore doesn't OOM.",
"created_at": "2025-07-25T14:42:21Z",
"html_url": "https://github.com/huggingface/datasets/issues/7694#issuecomment-3118206286",
"id": 3118206286,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7694",
"node_id": "IC_kwDODunzps653A1O",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3118206286/reactions"
},
"updated_at": "2025-07-25T14:42:21Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3118206286",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
}
] | 2025-07-21T07:51:25 | 2025-07-25T14:42:21 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When exporting a Dataset object to a JSON Lines file using the .to_json(lines=True) method, the process consumes a very large amount of memory. The memory usage is proportional to the size of the entire Dataset object being saved, rather than being a low, constant memory operation.
This behavior is unexpected, as the JSONL format is line-oriented and ideally suited for streaming writes. This issue can easily lead to Out-of-Memory (OOM) errors when exporting large datasets, especially in memory-constrained environments like Docker containers.
<img width="1343" height="329" alt="Image" src="https://github.com/user-attachments/assets/518b4263-ad12-422d-9672-28ffe97240ce" />
### Steps to reproduce the bug
```python
import os
from datasets import load_dataset, Dataset
from loguru import logger
# A public dataset to test with
REPO_ID = "adam89/TinyStoriesChinese"
SUBSET = "default"
SPLIT = "train"
NUM_ROWS_TO_LOAD = 10 # Use a reasonably large number to see the memory spike
def run_test():
"""Loads data into memory and then saves it, triggering the memory issue."""
logger.info("Step 1: Loading data into an in-memory Dataset object...")
# Create an in-memory Dataset object from a stream
# This simulates having a processed dataset ready to be saved
iterable_dataset = load_dataset(REPO_ID, name=SUBSET, split=SPLIT, streaming=True)
limited_stream = iterable_dataset.take(NUM_ROWS_TO_LOAD)
in_memory_dataset = Dataset.from_generator(limited_stream.__iter__)
logger.info(f"Dataset with {len(in_memory_dataset)} rows created in memory.")
output_path = "./test_output.jsonl"
logger.info(f"Step 2: Saving the dataset to {output_path} using .to_json()...")
logger.info("Please monitor memory usage during this step.")
# This is the step that causes the massive memory allocation
in_memory_dataset.to_json(output_path, force_ascii=False)
logger.info("Save operation complete.")
os.remove(output_path)
if __name__ == "__main__":
# To see the memory usage clearly, run this script with a memory profiler:
# python -m memray run your_script_name.py
# python -m memray tree xxx.bin
run_test()
```
### Expected behavior
I would expect the .to_json(lines=True) method to be a memory-efficient, streaming operation. The memory usage should remain low and relatively constant, as data is converted and written to the file line-by-line or in small batches. The memory footprint should not be proportional to the total number of rows in the in_memory_dataset.
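Until this is addressed, a constant-memory workaround is to iterate over the rows and write the JSONL file in small batches yourself. This is a minimal sketch; `write_jsonl_streaming` and its `batch_size` parameter are illustrative helpers, not part of the `datasets` API:

```python
import json

def write_jsonl_streaming(rows, path, batch_size=1000):
    """Write an iterable of dicts to a JSONL file in small batches.

    Peak memory stays bounded by roughly `batch_size` serialized rows,
    instead of the whole dataset's serialized output.
    """
    with open(path, "w", encoding="utf-8") as f:
        buf = []
        for row in rows:
            buf.append(json.dumps(row, ensure_ascii=False))
            if len(buf) >= batch_size:
                f.write("\n".join(buf) + "\n")
                buf.clear()
        if buf:
            f.write("\n".join(buf) + "\n")

# With the reproduction script above, this would replace the .to_json() call:
# write_jsonl_streaming(in_memory_dataset, "./test_output.jsonl")
```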
### Environment info
datasets version: 3.6.0
Python version: 3.9.18
OS: macOS 15.3.1 (arm64)
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7694/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7694/timeline
| null | null | null | null | 2025-07-21T07:51:25 | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7695
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7695/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7695/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7695/events
|
https://github.com/huggingface/datasets/pull/7695
| 3,251,904,843 |
PR_kwDODunzps6gB7jS
| 7,695 |
Support downloading specific splits in load_dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
{
"author_association": "CONTRIBUTOR",
"body": "I’ve completed the following steps to continue the partial split download support (from PR #6832):\r\n\r\nI did changes on top of what has been done by mario. Here are some of those changes: \r\n- Restored support for writing multiple split shards:\r\n\r\n- In _prepare_split_single, we now correctly replace JJJJJ and SSSSS placeholders in the fpath for job/shard IDs before creating the writer.\r\n\r\n- Added os.makedirs(os.path.dirname(path), exist_ok=True) after placeholder substitution to prevent FileNotFoundError.\r\n\r\n- Applied the fix to both split writers:\r\n\r\n 1] self._generate_examples version (used by most modules).\r\n\r\n 2] self._generate_tables version (used by IterableDatasetBuilder).\r\n\r\n- Confirmed 109/113 tests passing, meaning the general logic is working across the board.\r\n\r\nWhat’s still failing\r\n4 integration tests fail:\r\n\r\n`test_load_hub_dataset_with_single_config_in_metadata`\r\n\r\n`test_load_hub_dataset_with_two_config_in_metadata`\r\n\r\n`test_load_hub_dataset_with_metadata_config_in_parallel`\r\n\r\n`test_reload_old_cache_from_2_15`\r\n\r\nAll are due to FileNotFoundError from uncreated output paths, which I'm currently finalizing by ensuring os.makedirs() is correctly applied before every writer instantiation.\r\n\r\nI will update about these fixes after running tests!",
"created_at": "2025-07-22T09:38:59Z",
"html_url": "https://github.com/huggingface/datasets/pull/7695#issuecomment-3101895080",
"id": 3101895080,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7695",
"node_id": "IC_kwDODunzps644ymo",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3101895080/reactions"
},
"updated_at": "2025-07-24T07:17:34Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3101895080",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
}
},
{
"author_association": "CONTRIBUTOR",
"body": "@lhoestq this was just an update",
"created_at": "2025-07-24T07:17:57Z",
"html_url": "https://github.com/huggingface/datasets/pull/7695#issuecomment-3112330175",
"id": 3112330175,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7695",
"node_id": "IC_kwDODunzps65gmO_",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3112330175/reactions"
},
"updated_at": "2025-07-24T07:17:57Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3112330175",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7695). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"created_at": "2025-07-28T14:19:21Z",
"html_url": "https://github.com/huggingface/datasets/pull/7695#issuecomment-3127461721",
"id": 3127461721,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7695",
"node_id": "IC_kwDODunzps66aUdZ",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3127461721/reactions"
},
"updated_at": "2025-07-28T14:19:21Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3127461721",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/99929124?v=4",
"events_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/events{/privacy}",
"followers_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/followers",
"following_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/following{/other_user}",
"gists_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HuggingFaceDocBuilderDev",
"id": 99929124,
"login": "HuggingFaceDocBuilderDev",
"node_id": "U_kgDOBfTMJA",
"organizations_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/orgs",
"received_events_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/received_events",
"repos_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuggingFaceDocBuilderDev/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HuggingFaceDocBuilderDev",
"user_view_type": "public"
}
},
{
"author_association": "CONTRIBUTOR",
"body": "Local DIR wasn't doing well, dk actually what happened, will PR again! Sorry :)",
"created_at": "2025-07-28T17:23:08Z",
"html_url": "https://github.com/huggingface/datasets/pull/7695#issuecomment-3128245583",
"id": 3128245583,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7695",
"node_id": "IC_kwDODunzps66dT1P",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3128245583/reactions"
},
"updated_at": "2025-07-28T17:33:30Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3128245583",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
}
}
] | 2025-07-22T09:33:54 | 2025-07-28T17:33:30 | 2025-07-28T17:15:45 |
CONTRIBUTOR
| null | null | null |
This PR builds on #6832 by @mariosasko.
May close - #4101, #2538
Discussion - https://github.com/huggingface/datasets/pull/7648#issuecomment-3084050130
---
### Note - This PR is under work and frequent changes will be pushed.
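The shard-path fix described in the review comments (substituting the `JJJJJ`/`SSSSS` job/shard placeholders and creating the parent directory before a writer is instantiated) can be sketched roughly as follows. `prepare_shard_path` and the zero-padded format are illustrative assumptions, not the builder's actual code:

```python
import os

def prepare_shard_path(fpath, job_id, shard_id):
    # Substitute the job/shard placeholders in the template path, then
    # make sure the parent directory exists so the writer does not hit a
    # FileNotFoundError (placeholder scheme assumed from the PR discussion).
    path = fpath.replace("JJJJJ", f"{job_id:05d}").replace("SSSSS", f"{shard_id:05d}")
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    return path
```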
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7695/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7695/timeline
| null | null | 0 |
{
"diff_url": "https://github.com/huggingface/datasets/pull/7695.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7695",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7695.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7695"
}
| 2025-07-22T09:33:54 | 2025-07-28T17:15:45 | 6 days, 7:41:51 | true |
https://api.github.com/repos/huggingface/datasets/issues/7690
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7690/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7690/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7690/events
|
https://github.com/huggingface/datasets/pull/7690
| 3,244,380,691 |
PR_kwDODunzps6fozag
| 7,690 |
HDF5 support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/klamike",
"id": 17013474,
"login": "klamike",
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"repos_url": "https://api.github.com/users/klamike/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"type": "User",
"url": "https://api.github.com/users/klamike",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
{
"author_association": "NONE",
"body": "A few to-dos which I think can be left for future PRs (which I am happy to do/help with -- just this one is already huge 😄 ):\r\n- [Enum types](https://docs.h5py.org/en/stable/special.html#enumerated-types)\r\n- HDF5 [io](https://github.com/huggingface/datasets/tree/main/src/datasets/io)\r\n- [dataset-viewer](https://github.com/huggingface/dataset-viewer) support (not sure if changes are needed with the way it is written now)",
"created_at": "2025-07-23T02:11:02Z",
"html_url": "https://github.com/huggingface/datasets/pull/7690#issuecomment-3105391677",
"id": 3105391677,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7690",
"node_id": "IC_kwDODunzps65GIQ9",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3105391677/reactions"
},
"updated_at": "2025-07-23T02:54:11Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3105391677",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/klamike",
"id": 17013474,
"login": "klamike",
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"repos_url": "https://api.github.com/users/klamike/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"type": "User",
"url": "https://api.github.com/users/klamike",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "@lhoestq any interest in merging this? Let me know if I can do anything to make reviewing it easier!",
"created_at": "2025-07-25T15:22:21Z",
"html_url": "https://github.com/huggingface/datasets/pull/7690#issuecomment-3118570910",
"id": 3118570910,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7690",
"node_id": "IC_kwDODunzps654Z2e",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3118570910/reactions"
},
"updated_at": "2025-07-25T15:22:21Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3118570910",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/klamike",
"id": 17013474,
"login": "klamike",
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"repos_url": "https://api.github.com/users/klamike/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"type": "User",
"url": "https://api.github.com/users/klamike",
"user_view_type": "public"
}
}
] | 2025-07-18T21:09:41 | 2025-07-28T21:32:12 | null |
NONE
| null | null | null |
This PR adds support for tabular HDF5 file(s) by converting each row to an Arrow table. It supports columns with the usual dtypes including up to 5-dimensional arrays as well as support for complex/compound types by splitting them into several columns. All datasets within the HDF5 file should have rows on the first dimension (groups/subgroups are still allowed). Closes #3113.
Replaces #7625 which only supports a relatively small subset of HDF5.
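The compound-type handling described above (splitting a compound field into several columns) can be illustrated with a small pure-Python sketch. `flatten_record` and the dot-separated column naming are illustrative assumptions, not the PR's actual implementation:

```python
def flatten_record(rec, prefix=""):
    # Recursively split compound/nested fields into flat columns, similar
    # in spirit to mapping an HDF5 compound dtype onto separate Arrow columns.
    out = {}
    for key, val in rec.items():
        name = prefix + key
        if isinstance(val, dict):
            out.update(flatten_record(val, prefix=name + "."))
        else:
            out[name] = val
    return out

# flatten_record({"pos": {"x": 1.0, "y": 2.0}, "label": "a"})
# yields one flat column per leaf field: "pos.x", "pos.y", "label"
```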
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7690/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7690/timeline
| null | null | 0 |
{
"diff_url": "https://github.com/huggingface/datasets/pull/7690.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7690",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7690.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7690"
}
| 2025-07-18T21:09:41 | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7693
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7693/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7693/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7693/events
|
https://github.com/huggingface/datasets/issues/7693
| 3,246,369,678 |
I_kwDODunzps7Bf6uO
| 7,693 |
Dataset scripts are no longer supported, but found superb.py
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/114297534?v=4",
"events_url": "https://api.github.com/users/edwinzajac/events{/privacy}",
"followers_url": "https://api.github.com/users/edwinzajac/followers",
"following_url": "https://api.github.com/users/edwinzajac/following{/other_user}",
"gists_url": "https://api.github.com/users/edwinzajac/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/edwinzajac",
"id": 114297534,
"login": "edwinzajac",
"node_id": "U_kgDOBtAKvg",
"organizations_url": "https://api.github.com/users/edwinzajac/orgs",
"received_events_url": "https://api.github.com/users/edwinzajac/received_events",
"repos_url": "https://api.github.com/users/edwinzajac/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/edwinzajac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edwinzajac/subscriptions",
"type": "User",
"url": "https://api.github.com/users/edwinzajac",
"user_view_type": "public"
}
|
[] |
open
| false | null |
[] | null |
[
{
"author_association": "NONE",
"body": "I got a pretty similar issue when I try to load bigbio/neurotrial_ner dataset. \n`Dataset scripts are no longer supported, but found neurotrial_ner.py`",
"created_at": "2025-07-21T14:10:07Z",
"html_url": "https://github.com/huggingface/datasets/issues/7693#issuecomment-3096949005",
"id": 3096949005,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7693",
"node_id": "IC_kwDODunzps64l7EN",
"performed_via_github_app": null,
"reactions": {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3096949005/reactions"
},
"updated_at": "2025-07-21T14:10:07Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3096949005",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/6295808?v=4",
"events_url": "https://api.github.com/users/dejokz/events{/privacy}",
"followers_url": "https://api.github.com/users/dejokz/followers",
"following_url": "https://api.github.com/users/dejokz/following{/other_user}",
"gists_url": "https://api.github.com/users/dejokz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dejokz",
"id": 6295808,
"login": "dejokz",
"node_id": "MDQ6VXNlcjYyOTU4MDg=",
"organizations_url": "https://api.github.com/users/dejokz/orgs",
"received_events_url": "https://api.github.com/users/dejokz/received_events",
"repos_url": "https://api.github.com/users/dejokz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dejokz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dejokz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dejokz",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "Same here. I was running this tutorial and got a similar error: https://github.com/openai/whisper/discussions/654 (I'm a first-time transformers library user)\n\nRuntimeError: Dataset scripts are no longer supported, but found librispeech_asr.py\n\nWhat am I supposed to do at this point?\n\nThanks",
"created_at": "2025-07-22T10:43:47Z",
"html_url": "https://github.com/huggingface/datasets/issues/7693#issuecomment-3102186067",
"id": 3102186067,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7693",
"node_id": "IC_kwDODunzps6455pT",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3102186067/reactions"
},
"updated_at": "2025-07-22T10:44:03Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3102186067",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/45080861?v=4",
"events_url": "https://api.github.com/users/gMontoyaSpeech/events{/privacy}",
"followers_url": "https://api.github.com/users/gMontoyaSpeech/followers",
"following_url": "https://api.github.com/users/gMontoyaSpeech/following{/other_user}",
"gists_url": "https://api.github.com/users/gMontoyaSpeech/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gMontoyaSpeech",
"id": 45080861,
"login": "gMontoyaSpeech",
"node_id": "MDQ6VXNlcjQ1MDgwODYx",
"organizations_url": "https://api.github.com/users/gMontoyaSpeech/orgs",
"received_events_url": "https://api.github.com/users/gMontoyaSpeech/received_events",
"repos_url": "https://api.github.com/users/gMontoyaSpeech/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gMontoyaSpeech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gMontoyaSpeech/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gMontoyaSpeech",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "hey I got the same error and I have tried to downgrade version to 3.6.0 and it works.\n`pip install datasets==3.6.0`",
"created_at": "2025-07-22T15:27:21Z",
"html_url": "https://github.com/huggingface/datasets/issues/7693#issuecomment-3103380232",
"id": 3103380232,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7693",
"node_id": "IC_kwDODunzps64-dMI",
"performed_via_github_app": null,
"reactions": {
"+1": 4,
"-1": 0,
"confused": 2,
"eyes": 2,
"heart": 12,
"hooray": 2,
"laugh": 0,
"rocket": 2,
"total_count": 24,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3103380232/reactions"
},
"updated_at": "2025-07-22T15:27:21Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3103380232",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/202590134?v=4",
"events_url": "https://api.github.com/users/Tin-viAct/events{/privacy}",
"followers_url": "https://api.github.com/users/Tin-viAct/followers",
"following_url": "https://api.github.com/users/Tin-viAct/following{/other_user}",
"gists_url": "https://api.github.com/users/Tin-viAct/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Tin-viAct",
"id": 202590134,
"login": "Tin-viAct",
"node_id": "U_kgDODBNHtg",
"organizations_url": "https://api.github.com/users/Tin-viAct/orgs",
"received_events_url": "https://api.github.com/users/Tin-viAct/received_events",
"repos_url": "https://api.github.com/users/Tin-viAct/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Tin-viAct/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tin-viAct/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Tin-viAct",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "Thank you very much @Tin-viAct . That indeed did the trick for me :) \nNow the code continue its normal flow ",
"created_at": "2025-07-22T17:11:00Z",
"html_url": "https://github.com/huggingface/datasets/issues/7693#issuecomment-3103924102",
"id": 3103924102,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7693",
"node_id": "IC_kwDODunzps65Ah-G",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 1,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3103924102/reactions"
},
"updated_at": "2025-07-22T17:11:00Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3103924102",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/45080861?v=4",
"events_url": "https://api.github.com/users/gMontoyaSpeech/events{/privacy}",
"followers_url": "https://api.github.com/users/gMontoyaSpeech/followers",
"following_url": "https://api.github.com/users/gMontoyaSpeech/following{/other_user}",
"gists_url": "https://api.github.com/users/gMontoyaSpeech/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gMontoyaSpeech",
"id": 45080861,
"login": "gMontoyaSpeech",
"node_id": "MDQ6VXNlcjQ1MDgwODYx",
"organizations_url": "https://api.github.com/users/gMontoyaSpeech/orgs",
"received_events_url": "https://api.github.com/users/gMontoyaSpeech/received_events",
"repos_url": "https://api.github.com/users/gMontoyaSpeech/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gMontoyaSpeech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gMontoyaSpeech/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gMontoyaSpeech",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "Thanks @Tin-viAct, Works!",
"created_at": "2025-07-24T14:25:36Z",
"html_url": "https://github.com/huggingface/datasets/issues/7693#issuecomment-3113684351",
"id": 3113684351,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7693",
"node_id": "IC_kwDODunzps65lw1_",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3113684351/reactions"
},
"updated_at": "2025-07-24T14:25:36Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3113684351",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/80170218?v=4",
"events_url": "https://api.github.com/users/johnbarb71/events{/privacy}",
"followers_url": "https://api.github.com/users/johnbarb71/followers",
"following_url": "https://api.github.com/users/johnbarb71/following{/other_user}",
"gists_url": "https://api.github.com/users/johnbarb71/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/johnbarb71",
"id": 80170218,
"login": "johnbarb71",
"node_id": "MDQ6VXNlcjgwMTcwMjE4",
"organizations_url": "https://api.github.com/users/johnbarb71/orgs",
"received_events_url": "https://api.github.com/users/johnbarb71/received_events",
"repos_url": "https://api.github.com/users/johnbarb71/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/johnbarb71/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnbarb71/subscriptions",
"type": "User",
"url": "https://api.github.com/users/johnbarb71",
"user_view_type": "public"
}
},
{
"author_association": "MEMBER",
"body": "I converted [openslr/librispeech_asr](https://huggingface.co/datasets/openslr/librispeech_asr) to Parquet - thanks for reporting.\n\nIt's now compatible with `datasets` 4.0 !\n\nI'll try to ping the authors of the other datasets like [s3prl/superb](https://huggingface.co/datasets/s3prl/superb) and [espnet/yodas2](https://huggingface.co/datasets/espnet/yodas2)",
"created_at": "2025-07-25T15:16:36Z",
"html_url": "https://github.com/huggingface/datasets/issues/7693#issuecomment-3118509647",
"id": 3118509647,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7693",
"node_id": "IC_kwDODunzps654K5P",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3118509647/reactions"
},
"updated_at": "2025-07-25T15:19:13Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3118509647",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "How come a breaking change was allowed and now requires extra work from individual authors for things to be usable? \n\nhttps://en.wikipedia.org/wiki/Backward_compatibility",
"created_at": "2025-07-29T10:08:22Z",
"html_url": "https://github.com/huggingface/datasets/issues/7693#issuecomment-3131714366",
"id": 3131714366,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7693",
"node_id": "IC_kwDODunzps66qis-",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3131714366/reactions"
},
"updated_at": "2025-07-29T10:08:22Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3131714366",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/27398253?v=4",
"events_url": "https://api.github.com/users/pgzmnk/events{/privacy}",
"followers_url": "https://api.github.com/users/pgzmnk/followers",
"following_url": "https://api.github.com/users/pgzmnk/following{/other_user}",
"gists_url": "https://api.github.com/users/pgzmnk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pgzmnk",
"id": 27398253,
"login": "pgzmnk",
"node_id": "MDQ6VXNlcjI3Mzk4MjUz",
"organizations_url": "https://api.github.com/users/pgzmnk/orgs",
"received_events_url": "https://api.github.com/users/pgzmnk/received_events",
"repos_url": "https://api.github.com/users/pgzmnk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pgzmnk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pgzmnk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pgzmnk",
"user_view_type": "public"
}
},
{
"author_association": "MEMBER",
"body": "We follow semantic versioning so that breaking changes only occur in major releases. Also note that dataset scripts have been legacy for some time now, with a message on the dataset pages to ask authors to update their datasets.\n\nIt's ok to ping older versions of `datasets`, but imo a few remaining datasets need to be converted since they are valuable to the community.",
"created_at": "2025-07-30T15:01:03Z",
"html_url": "https://github.com/huggingface/datasets/issues/7693#issuecomment-3136740791",
"id": 3136740791,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7693",
"node_id": "IC_kwDODunzps669t23",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3136740791/reactions"
},
"updated_at": "2025-07-30T15:01:03Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3136740791",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
}
] | 2025-07-20T13:48:06 | 2025-07-30T15:01:03 | null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hello,
I'm trying to follow the [Hugging Face Pipelines tutorial](https://huggingface.co/docs/transformers/main_classes/pipelines), but it only seems to work with older versions of `datasets`.
I then get the following error:
```
--------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[65], line 1
----> 1 dataset = datasets.load_dataset("superb", name="asr", split="test")
3 # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
4 # as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
5 for out in tqdm(pipe(KeyDataset(dataset, "file"))):
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1392, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1387 verification_mode = VerificationMode(
1388 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
1389 )
1391 # Create a dataset builder
-> 1392 builder_instance = load_dataset_builder(
1393 path=path,
1394 name=name,
1395 data_dir=data_dir,
1396 data_files=data_files,
1397 cache_dir=cache_dir,
1398 features=features,
1399 download_config=download_config,
1400 download_mode=download_mode,
1401 revision=revision,
1402 token=token,
1403 storage_options=storage_options,
1404 **config_kwargs,
1405 )
1407 # Return iterable dataset in case of streaming
1408 if streaming:
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1132, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, **config_kwargs)
1130 if features is not None:
1131 features = _fix_for_backward_compatible_features(features)
-> 1132 dataset_module = dataset_module_factory(
1133 path,
1134 revision=revision,
1135 download_config=download_config,
1136 download_mode=download_mode,
1137 data_dir=data_dir,
1138 data_files=data_files,
1139 cache_dir=cache_dir,
1140 )
1141 # Get dataset builder class
1142 builder_kwargs = dataset_module.builder_kwargs
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1031, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
1026 if isinstance(e1, FileNotFoundError):
1027 raise FileNotFoundError(
1028 f"Couldn't find any data file at {relative_to_absolute_path(path)}. "
1029 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1030 ) from None
-> 1031 raise e1 from None
1032 else:
1033 raise FileNotFoundError(f"Couldn't find any data file at {relative_to_absolute_path(path)}.")
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:989, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
981 try:
982 api.hf_hub_download(
983 repo_id=path,
984 filename=filename,
(...) 987 proxies=download_config.proxies,
988 )
--> 989 raise RuntimeError(f"Dataset scripts are no longer supported, but found (unknown)")
990 except EntryNotFoundError:
991 # Use the infos from the parquet export except in some cases:
992 if data_dir or data_files or (revision and revision != "main"):
RuntimeError: Dataset scripts are no longer supported, but found superb.py
```
NB: I tried replacing "superb" with "anton-l/superb_demo", but I get a 'torchcodec' import error. Maybe I misunderstood something.
### Steps to reproduce the bug
```
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")
# KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
# as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
print(out)
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
# {"text": ....}
# ....
```
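For context on the traceback above: `datasets` 4.0.0 dropped support for loading-script datasets such as `superb.py`, so the usual workarounds are pinning `datasets<4.0` or pointing at a Parquet-converted repository. Below is a minimal, hypothetical sketch of a version gate that gives a clearer message up front — `supports_dataset_scripts` is an illustrative helper, not a real `datasets` API:

```python
# Hypothetical helper: `datasets` removed loading-script support in the 4.x
# major release, so a quick version gate can surface a clearer message than
# the RuntimeError shown in the traceback above.
def supports_dataset_scripts(version: str) -> bool:
    """Return True if this `datasets` version can still run dataset scripts."""
    major = int(version.split(".")[0])
    return major < 4

# Example: decide whether a script-based dataset like "superb" is loadable.
for v in ("3.6.0", "4.0.0"):
    status = "ok" if supports_dataset_scripts(v) else "pin datasets<4.0 or use a Parquet conversion"
    print(f"datasets=={v}: {status}")
```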
### Expected behavior
Get the tutorial expected results
### Environment info
--- SYSTEM INFO ---
Operating System: Ubuntu 24.10
Kernel: Linux 6.11.0-29-generic
Architecture: x86-64
--- PYTHON ---
Python 3.11.13
--- VENV INFO ----
datasets=4.0.0
transformers=4.53
tqdm=4.67.1
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7693/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7693/timeline
| null | null | null | null | 2025-07-20T13:48:06 | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7696
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7696/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7696/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7696/events
|
https://github.com/huggingface/datasets/issues/7696
| 3,253,433,350 |
I_kwDODunzps7B63QG
| 7,696 |
load_dataset() in 4.0.0 returns different audio samples compared to earlier versions breaking reproducibility
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4",
"events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}",
"followers_url": "https://api.github.com/users/Manalelaidouni/followers",
"following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}",
"gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Manalelaidouni",
"id": 25346345,
"login": "Manalelaidouni",
"node_id": "MDQ6VXNlcjI1MzQ2MzQ1",
"organizations_url": "https://api.github.com/users/Manalelaidouni/orgs",
"received_events_url": "https://api.github.com/users/Manalelaidouni/received_events",
"repos_url": "https://api.github.com/users/Manalelaidouni/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Manalelaidouni",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
{
"author_association": "MEMBER",
"body": "Hi ! This is because `datasets` now uses the FFmpeg-based library `torchcodec` instead of the libsndfile-based library `soundfile` to decode audio data. Those two have different decoding implementations",
"created_at": "2025-07-25T14:27:36Z",
"html_url": "https://github.com/huggingface/datasets/issues/7696#issuecomment-3118059961",
"id": 3118059961,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7696",
"node_id": "IC_kwDODunzps652dG5",
"performed_via_github_app": null,
"reactions": {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3118059961/reactions"
},
"updated_at": "2025-07-25T14:27:36Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3118059961",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
},
{
"author_association": "NONE",
"body": "I’m all for torchcodec, good luck with the migration!",
"created_at": "2025-07-30T14:22:18Z",
"html_url": "https://github.com/huggingface/datasets/issues/7696#issuecomment-3136587542",
"id": 3136587542,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/7696",
"node_id": "IC_kwDODunzps669IcW",
"performed_via_github_app": null,
"reactions": {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3136587542/reactions"
},
"updated_at": "2025-07-30T14:22:18Z",
"url": "https://api.github.com/repos/huggingface/datasets/issues/comments/3136587542",
"user": {
"avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4",
"events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}",
"followers_url": "https://api.github.com/users/Manalelaidouni/followers",
"following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}",
"gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Manalelaidouni",
"id": 25346345,
"login": "Manalelaidouni",
"node_id": "MDQ6VXNlcjI1MzQ2MzQ1",
"organizations_url": "https://api.github.com/users/Manalelaidouni/orgs",
"received_events_url": "https://api.github.com/users/Manalelaidouni/received_events",
"repos_url": "https://api.github.com/users/Manalelaidouni/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Manalelaidouni",
"user_view_type": "public"
}
}
] | 2025-07-22T17:02:17 | 2025-07-30T14:22:21 | 2025-07-30T14:22:21 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
In the datasets 4.0.0 release, `load_dataset()` returns different audio samples than earlier versions, which breaks integration tests that depend on consistent sample data across environments (the first and second environments specified below).
### Steps to reproduce the bug
```python
from datasets import Audio, load_dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(24000))
sample= ds[0]["audio"]["array"]
print(sample)
# sample in 3.6.0
[0.00231914 0.00245417 0.00187414 ... 0.00061956 0.00101157 0.00076325]
# sample in 4.0.0
array([0.00238037, 0.00220794, 0.00198703, ..., 0.00057983, 0.00085863,
0.00115309], dtype=float32)
```
### Expected behavior
The same dataset should load identical samples across versions to maintain reproducibility.
### Environment info
First env:
- datasets version: 3.6.0
- Platform: Windows-10-10.0.26100-SP0
- Python: 3.11.0
Second env:
- datasets version: 4.0.0
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python: 3.11.13
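A minimal sketch of why exact-match assertions break here, using the sample values quoted above: the two decoder backends produce slightly different floats for the same clip, so cross-version tests have to compare with a tolerance instead of exact equality (the `atol` below is illustrative, not a recommendation):

```python
import numpy as np

# First samples of the same clip as printed above: datasets 3.6.0 (soundfile)
# vs datasets 4.0.0 (torchcodec). Values copied from the report, truncated.
old = np.array([0.00231914, 0.00245417, 0.00187414])
new = np.array([0.00238037, 0.00220794, 0.00198703])

exact_match = np.array_equal(old, new)           # fails across decoder backends
close_enough = np.allclose(old, new, atol=1e-3)  # waveforms still agree loosely
print(exact_match, close_enough)
```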
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4",
"events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}",
"followers_url": "https://api.github.com/users/Manalelaidouni/followers",
"following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}",
"gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Manalelaidouni",
"id": 25346345,
"login": "Manalelaidouni",
"node_id": "MDQ6VXNlcjI1MzQ2MzQ1",
"organizations_url": "https://api.github.com/users/Manalelaidouni/orgs",
"received_events_url": "https://api.github.com/users/Manalelaidouni/received_events",
"repos_url": "https://api.github.com/users/Manalelaidouni/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Manalelaidouni",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7696/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7696/timeline
| null |
completed
| null | null | 2025-07-22T17:02:17 | 2025-07-30T14:22:21 | 7 days, 21:20:04 | false |