Dataset Viewer

Column schema:

| column | type |
|---|---|
| url | string |
| repository_url | string |
| labels_url | string |
| comments_url | string |
| events_url | string |
| html_url | string |
| id | int64 |
| node_id | string |
| number | int64 |
| title | string |
| user | dict |
| labels | list |
| state | string |
| locked | bool |
| assignee | dict |
| assignees | list |
| milestone | dict |
| comments | list |
| created_at | timestamp[s] |
| updated_at | timestamp[s] |
| closed_at | timestamp[s] |
| author_association | string |
| type | null |
| active_lock_reason | null |
| draft | bool |
| pull_request | dict |
| body | string |
| closed_by | dict |
| reactions | dict |
| timeline_url | string |
| performed_via_github_app | null |
| state_reason | string |
| sub_issues_summary | dict |
| issue_dependencies_summary | dict |
| is_pull_request | bool |

Each record below follows this column order.
url: https://api.github.com/repos/huggingface/datasets/issues/7785
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7785/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7785/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7785/events
html_url: https://github.com/huggingface/datasets/pull/7785
id: 3439897018
node_id: PR_kwDODunzps6pyTM_
number: 7785
title: Fix Audio docstring by removing unsupported mono argument
user:
{
"login": "tanuj-rai",
"id": 84439872,
"node_id": "MDQ6VXNlcjg0NDM5ODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/84439872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanuj-rai",
"html_url": "https://github.com/tanuj-rai",
"followers_url": "https://api.github.com/users/tanuj-rai/followers",
"following_url": "https://api.github.com/users/tanuj-rai/following{/other_user}",
"gists_url": "https://api.github.com/users/tanuj-rai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanuj-rai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanuj-rai/subscriptions",
"organizations_url": "https://api.github.com/users/tanuj-rai/orgs",
"repos_url": "https://api.github.com/users/tanuj-rai/repos",
"events_url": "https://api.github.com/users/tanuj-rai/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanuj-rai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"I think we can keep the arg and add the missing torch.mean() in the Audio.decode_example method"
]
created_at: 2025-09-22T09:06:52
updated_at: 2025-09-22T09:21:31
closed_at: null
author_association: CONTRIBUTOR
type: null
active_lock_reason: null
draft: false
pull_request:
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7785",
"html_url": "https://github.com/huggingface/datasets/pull/7785",
"diff_url": "https://github.com/huggingface/datasets/pull/7785.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7785.patch",
"merged_at": null
}
body:
This PR fixes issue #7745.
Who can review:
@lhoestq
closed_by: null
reactions:
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7785/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
is_pull_request: true
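The comment on PR 7785 suggests keeping the `mono` argument and adding the missing downmix in `Audio.decode_example` instead. A minimal sketch of that idea, illustrated here with NumPy rather than the `torch.mean()` call the comment names; the function name is hypothetical and not part of the `datasets` API:

```python
import numpy as np

def downmix_to_mono(waveform: np.ndarray) -> np.ndarray:
    # Average across the channel axis of a (channels, samples) array,
    # mirroring what a torch.mean(dim=0) downmix would do in decode_example.
    if waveform.ndim == 2 and waveform.shape[0] > 1:
        return waveform.mean(axis=0)
    return waveform
```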
url: https://api.github.com/repos/huggingface/datasets/issues/7783
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7783/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7783/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7783/events
html_url: https://github.com/huggingface/datasets/pull/7783
id: 3430715779
node_id: PR_kwDODunzps6pT7pg
number: 7783
title: Adapt and test huggingface_hub v1.0.0.rc0
user:
{
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7783). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
]
created_at: 2025-09-18T14:45:20
updated_at: 2025-09-22T13:03:29
closed_at: null
author_association: CONTRIBUTOR
type: null
active_lock_reason: null
draft: true
pull_request:
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7783",
"html_url": "https://github.com/huggingface/datasets/pull/7783",
"diff_url": "https://github.com/huggingface/datasets/pull/7783.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7783.patch",
"merged_at": null
}
body:
Test as part of https://github.com/huggingface/huggingface_hub/issues/3340
closed_by: null
reactions:
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7783/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
is_pull_request: true
url: https://api.github.com/repos/huggingface/datasets/issues/7782
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7782/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7782/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7782/events
html_url: https://github.com/huggingface/datasets/pull/7782
id: 3430341875
node_id: PR_kwDODunzps6pSozj
number: 7782
title: set dev version
user:
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7782). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
]
created_at: 2025-09-18T13:15:56
updated_at: 2025-09-18T13:20:03
closed_at: 2025-09-18T13:16:04
author_association: MEMBER
type: null
active_lock_reason: null
draft: false
pull_request:
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7782",
"html_url": "https://github.com/huggingface/datasets/pull/7782",
"diff_url": "https://github.com/huggingface/datasets/pull/7782.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7782.patch",
"merged_at": "2025-09-18T13:16:04"
}
body: null
closed_by:
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
reactions:
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7782/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
is_pull_request: true
url: https://api.github.com/repos/huggingface/datasets/issues/7781
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7781/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7781/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7781/events
html_url: https://github.com/huggingface/datasets/pull/7781
id: 3430332841
node_id: PR_kwDODunzps6pSm0C
number: 7781
title: release: 4.1.1
user:
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7781). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
]
created_at: 2025-09-18T13:13:47
updated_at: 2025-09-18T13:16:48
closed_at: 2025-09-18T13:14:47
author_association: MEMBER
type: null
active_lock_reason: null
draft: false
pull_request:
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7781",
"html_url": "https://github.com/huggingface/datasets/pull/7781",
"diff_url": "https://github.com/huggingface/datasets/pull/7781.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7781.patch",
"merged_at": "2025-09-18T13:14:47"
}
body: null
closed_by:
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
reactions:
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7781/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
is_pull_request: true
url: https://api.github.com/repos/huggingface/datasets/issues/7780
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7780/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7780/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7780/events
html_url: https://github.com/huggingface/datasets/issues/7780
id: 3429267259
node_id: I_kwDODunzps7MZnc7
number: 7780
title: BIGPATENT dataset inaccessible (deprecated script loader)
user:
{
"login": "ishmaifan",
"id": 137755081,
"node_id": "U_kgDOCDX5yQ",
"avatar_url": "https://avatars.githubusercontent.com/u/137755081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ishmaifan",
"html_url": "https://github.com/ishmaifan",
"followers_url": "https://api.github.com/users/ishmaifan/followers",
"following_url": "https://api.github.com/users/ishmaifan/following{/other_user}",
"gists_url": "https://api.github.com/users/ishmaifan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ishmaifan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ishmaifan/subscriptions",
"organizations_url": "https://api.github.com/users/ishmaifan/orgs",
"repos_url": "https://api.github.com/users/ishmaifan/repos",
"events_url": "https://api.github.com/users/ishmaifan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ishmaifan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"Hi ! I opened https://huggingface.co/datasets/NortheasternUniversity/big_patent/discussions/7 to update the dataset, hopefully it's merged soon !"
]
created_at: 2025-09-18T08:25:34
updated_at: 2025-09-19T14:35:54
closed_at: null
author_association: NONE
type: null
active_lock_reason: null
draft: null
pull_request: null
body:
dataset: https://huggingface.co/datasets/NortheasternUniversity/big_patent
When I try to load it with the `datasets` library, it fails with:
RuntimeError: Dataset scripts are no longer supported, but found big_patent.py
Could you please publish a Parquet/Arrow export of BIGPATENT on the Hugging Face Hub so that it can be accessed with datasets>=4.x?
closed_by: null
reactions:
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7780/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary:
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
issue_dependencies_summary:
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
is_pull_request: false
url: https://api.github.com/repos/huggingface/datasets/issues/7779
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7779/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7779/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7779/events
html_url: https://github.com/huggingface/datasets/pull/7779
id: 3427108011
node_id: PR_kwDODunzps6pHnI4
number: 7779
title: fix empty dataset to_parquet
user:
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7779). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
]
created_at: 2025-09-17T17:03:56
updated_at: 2025-09-17T17:07:35
closed_at: 2025-09-17T17:04:32
author_association: MEMBER
type: null
active_lock_reason: null
draft: false
pull_request:
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7779",
"html_url": "https://github.com/huggingface/datasets/pull/7779",
"diff_url": "https://github.com/huggingface/datasets/pull/7779.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7779.patch",
"merged_at": "2025-09-17T17:04:32"
}
body: null
closed_by:
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
reactions:
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7779/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
is_pull_request: true
url: https://api.github.com/repos/huggingface/datasets/issues/7778
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7778/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7778/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7778/events
html_url: https://github.com/huggingface/datasets/pull/7778
id: 3425917119
node_id: PR_kwDODunzps6pDkX-
number: 7778
title: [FIX] force spawning pool for MacOS
user:
{
"login": "burtenshaw",
"id": 19620375,
"node_id": "MDQ6VXNlcjE5NjIwMzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/19620375?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/burtenshaw",
"html_url": "https://github.com/burtenshaw",
"followers_url": "https://api.github.com/users/burtenshaw/followers",
"following_url": "https://api.github.com/users/burtenshaw/following{/other_user}",
"gists_url": "https://api.github.com/users/burtenshaw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/burtenshaw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/burtenshaw/subscriptions",
"organizations_url": "https://api.github.com/users/burtenshaw/orgs",
"repos_url": "https://api.github.com/users/burtenshaw/repos",
"events_url": "https://api.github.com/users/burtenshaw/events{/privacy}",
"received_events_url": "https://api.github.com/users/burtenshaw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7778). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"After more discussions on slack, we can switch the default to spawn.\r\n\r\nLet's use multiprocess instead of multiprocessing and maybe add a check to apply this only on Macos ?"
]
created_at: 2025-09-17T11:38:38
updated_at: 2025-09-18T17:04:45
closed_at: null
author_association: NONE
type: null
active_lock_reason: null
draft: true
pull_request:
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7778",
"html_url": "https://github.com/huggingface/datasets/pull/7778",
"diff_url": "https://github.com/huggingface/datasets/pull/7778.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7778.patch",
"merged_at": null
}
body:
This PR gets multiprocessing to work on macOS:
```python
from datasets import load_dataset
ds = load_dataset("fka/awesome-chatgpt-prompts", split="train").take(100)
ds = ds.map(lambda x: x, num_proc=4)
ds.push_to_hub("burtenshaw/dataset-test", num_proc=4)
```
closed_by: null
reactions:
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7778/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
is_pull_request: true
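The review discussion on PR 7778 settles on forcing the "spawn" start method on macOS, where the default "fork" can hang, and on using `multiprocess` rather than `multiprocessing`. A rough illustration of the idea with the standard library (the PR itself targets the `multiprocess` fork used by `datasets`; `run_pool` is a hypothetical helper, not PR code):

```python
import multiprocessing as mp
import platform

def _square(x):
    return x * x

def run_pool(values):
    # On macOS, prefer "spawn": "fork" can deadlock in threaded parents.
    if platform.system() == "Darwin":
        ctx = mp.get_context("spawn")
    else:
        ctx = mp.get_context()  # platform default elsewhere
    with ctx.Pool(processes=2) as pool:
        return pool.map(_square, values)
```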
url: https://api.github.com/repos/huggingface/datasets/issues/7777
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7777/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7777/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7777/events
html_url: https://github.com/huggingface/datasets/issues/7777
id: 3424462082
node_id: I_kwDODunzps7MHSUC
number: 7777
title: push_to_hub not overwriting but stuck in a loop when there are existing commits
user:
{
"login": "Darejkal",
"id": 55143337,
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Darejkal",
"html_url": "https://github.com/Darejkal",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"HTTP 412 means a commit happened in the meantime, so `get_deletions_and_dataset_card` has to retry to get the latest version of the dataset card and what files to delete based on the latest version of the dataset repository\n\nAre you running other operations in the dataset repo for your push_to_hub ?",
"There was only a map() followed by a push_to_hub(). The repo had one prior commit also by using push_to_hub(). The error disappeared when I downgraded datasets to 4.0.0.",
"It is reproducible if you use finegrained token with Read+Write (Open pull request) access to only that repo.",
"Ah it was due to the use of requests_cache with POST methods, closing this. "
]
created_at: 2025-09-17T03:15:35
updated_at: 2025-09-17T19:31:14
closed_at: 2025-09-17T19:31:14
author_association: NONE
type: null
active_lock_reason: null
draft: null
pull_request: null
body:
### Describe the bug
`get_deletions_and_dataset_card` gets stuck retrying on an "a commit has happened" error (HTTP 412) during `push_to_hub` with tag 4.1.0. The error does not exist in 4.0.0.
### Steps to reproduce the bug
Call `push_to_hub` twice, each time with different content for the `datasets.Dataset`.
The code gets stuck in a `time.sleep` loop inside `get_deletions_and_dataset_card`. If the error is explicitly printed, it is HTTP 412.
### Expected behavior
The new dataset overwrites the existing one in the repo.
### Environment info
datasets 4.1.0
closed_by:
{
"login": "Darejkal",
"id": 55143337,
"node_id": "MDQ6VXNlcjU1MTQzMzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Darejkal",
"html_url": "https://github.com/Darejkal",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
"organizations_url": "https://api.github.com/users/Darejkal/orgs",
"repos_url": "https://api.github.com/users/Darejkal/repos",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Darejkal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
reactions:
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7777/timeline
performed_via_github_app: null
state_reason: completed
sub_issues_summary:
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
issue_dependencies_summary:
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
is_pull_request: false
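The maintainer's explanation in issue 7777 is a classic optimistic-concurrency pattern: HTTP 412 means another commit landed first, so the client must re-read the latest repo state before retrying rather than sleeping in a loop on stale state. A generic sketch of that pattern (all names here are hypothetical, not the actual `datasets` internals):

```python
import time

class PreconditionFailed(Exception):
    """Stand-in for an HTTP 412 (precondition failed) response."""

def commit_with_retry(push, fetch_state, max_retries=5):
    # Re-read the latest repo state before each attempt, as
    # get_deletions_and_dataset_card is meant to do on 412.
    state = fetch_state()
    for attempt in range(max_retries):
        try:
            return push(state)
        except PreconditionFailed:
            time.sleep(0.01 * 2 ** attempt)  # brief backoff
            state = fetch_state()            # refresh, never retry stale state
    raise RuntimeError("gave up after repeated concurrent commits")
```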
url: https://api.github.com/repos/huggingface/datasets/issues/7776
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7776/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7776/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7776/events
html_url: https://github.com/huggingface/datasets/pull/7776
id: 3420364069
node_id: PR_kwDODunzps6ow4yI
number: 7776
title: [docs] Fix broken WebDataset link on “Create a video dataset” page
user:
{
"login": "Username46786",
"id": 98800422,
"node_id": "U_kgDOBeOTJg",
"avatar_url": "https://avatars.githubusercontent.com/u/98800422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Username46786",
"html_url": "https://github.com/Username46786",
"followers_url": "https://api.github.com/users/Username46786/followers",
"following_url": "https://api.github.com/users/Username46786/following{/other_user}",
"gists_url": "https://api.github.com/users/Username46786/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Username46786/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Username46786/subscriptions",
"organizations_url": "https://api.github.com/users/Username46786/orgs",
"repos_url": "https://api.github.com/users/Username46786/repos",
"events_url": "https://api.github.com/users/Username46786/events{/privacy}",
"received_events_url": "https://api.github.com/users/Username46786/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 2025-09-16T04:49:32
updated_at: 2025-09-16T04:49:32
closed_at: null
author_association: NONE
type: null
active_lock_reason: null
draft: false
pull_request:
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7776",
"html_url": "https://github.com/huggingface/datasets/pull/7776",
"diff_url": "https://github.com/huggingface/datasets/pull/7776.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7776.patch",
"merged_at": null
}
body:
### What
Fix the "WebDataset documentation" link on the Create a video dataset page to point
to the correct section on the video load guide.
### Why
The link currently points to an external repo, but the Hugging Face docs
have an internal "WebDataset" section under video_load.
### How
- docs/source/video_dataset.mdx: updated link to
`https://huggingface.co/docs/datasets/main/en/video_load#webdataset`
### Issue
Fixes #7699
closed_by: null
reactions:
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7776/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
is_pull_request: true
url: https://api.github.com/repos/huggingface/datasets/issues/7775
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7775/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7775/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7775/events
html_url: https://github.com/huggingface/datasets/pull/7775
id: 3418859494
node_id: PR_kwDODunzps6or2J2
number: 7775
title: fix iterate nested field
user:
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7775). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
]
created_at: 2025-09-15T17:28:34
updated_at: 2025-09-15T17:31:14
closed_at: 2025-09-15T17:28:42
author_association: MEMBER
type: null
active_lock_reason: null
draft: false
pull_request:
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7775",
"html_url": "https://github.com/huggingface/datasets/pull/7775",
"diff_url": "https://github.com/huggingface/datasets/pull/7775.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7775.patch",
"merged_at": "2025-09-15T17:28:42"
}
body: null
closed_by:
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
reactions:
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7775/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
is_pull_request: true
url: https://api.github.com/repos/huggingface/datasets/issues/7774
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7774/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7774/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7774/events
html_url: https://github.com/huggingface/datasets/pull/7774
id: 3418712977
node_id: PR_kwDODunzps6orVvQ
number: 7774
title: Set dev version
user:
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7774). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-09-15T16:42:33 | 2025-09-15T16:45:16 | 2025-09-15T16:42:47 |
MEMBER
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7774",
"html_url": "https://github.com/huggingface/datasets/pull/7774",
"diff_url": "https://github.com/huggingface/datasets/pull/7774.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7774.patch",
"merged_at": "2025-09-15T16:42:47"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7774/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7773
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7773/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7773/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7773/events
|
https://github.com/huggingface/datasets/pull/7773
| 3,418,672,306 |
PR_kwDODunzps6orM4C
| 7,773 |
Release: 4.1.0
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7773). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-09-15T16:30:37 | 2025-09-15T16:33:40 | 2025-09-15T16:33:39 |
MEMBER
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7773",
"html_url": "https://github.com/huggingface/datasets/pull/7773",
"diff_url": "https://github.com/huggingface/datasets/pull/7773.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7773.patch",
"merged_at": "2025-09-15T16:33:39"
}
| null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7773/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7772
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7772/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7772/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7772/events
|
https://github.com/huggingface/datasets/issues/7772
| 3,417,353,751 |
I_kwDODunzps7LsK4X
| 7,772 |
Error processing scalar columns using tensorflow.
|
{
"login": "khteh",
"id": 3871483,
"node_id": "MDQ6VXNlcjM4NzE0ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3871483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khteh",
"html_url": "https://github.com/khteh",
"followers_url": "https://api.github.com/users/khteh/followers",
"following_url": "https://api.github.com/users/khteh/following{/other_user}",
"gists_url": "https://api.github.com/users/khteh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khteh/subscriptions",
"organizations_url": "https://api.github.com/users/khteh/orgs",
"repos_url": "https://api.github.com/users/khteh/repos",
"events_url": "https://api.github.com/users/khteh/events{/privacy}",
"received_events_url": "https://api.github.com/users/khteh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-09-15T10:36:31 | 2025-09-15T10:49:17 | null |
NONE
| null | null | null | null |
`datasets==4.0.0`
```
columns_to_return = ['input_ids','attention_mask', 'start_positions', 'end_positions']
train_ds.set_format(type='tf', columns=columns_to_return)
```
`train_ds`:
```
train_ds type: <class 'datasets.arrow_dataset.Dataset'>, shape: (1000, 9)
columns: ['question', 'sentences', 'answer', 'str_idx', 'end_idx', 'input_ids', 'attention_mask', 'start_positions', 'end_positions']
features:{'question': Value('string'), 'sentences': Value('string'), 'answer': Value('string'), 'str_idx': Value('int64'), 'end_idx': Value('int64'), 'input_ids': List(Value('int32')), 'attention_mask': List(Value('int8')), 'start_positions': Value('int64'), 'end_positions': Value('int64')}
```
`train_ds_tensor = train_ds['start_positions'].to_tensor(shape=(-1,1))` hits the following error:
```
AttributeError: 'Column' object has no attribute 'to_tensor'
```
`tf.reshape(train_ds['start_positions'], shape=[-1,1])` hits the following error:
```
TypeError: Scalar tensor has no `len()`
```
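A possible workaround (a hedged sketch, not tested against this exact setup) is to materialize the `Column` as a NumPy array before reshaping, since `Column` exposes no `to_tensor` method:

```python
import numpy as np

# Hypothetical workaround: materialize the Column into a NumPy array first,
# then reshape it to the (batch, 1) shape expected here.
start_positions = np.asarray(list(range(1000)))  # stands in for train_ds['start_positions']
reshaped = start_positions.reshape(-1, 1)        # equivalent of to_tensor(shape=(-1, 1))
```

The resulting array can then be handed to `tf.convert_to_tensor` if a TensorFlow tensor is needed.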
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7772/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7771
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7771/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7771/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7771/events
|
https://github.com/huggingface/datasets/pull/7771
| 3,414,655,424 |
PR_kwDODunzps6ody5P
| 7,771 |
Add support for arrow iterable when concatenating or interleaving
|
{
"login": "radulescupetru",
"id": 26553095,
"node_id": "MDQ6VXNlcjI2NTUzMDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/26553095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/radulescupetru",
"html_url": "https://github.com/radulescupetru",
"followers_url": "https://api.github.com/users/radulescupetru/followers",
"following_url": "https://api.github.com/users/radulescupetru/following{/other_user}",
"gists_url": "https://api.github.com/users/radulescupetru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/radulescupetru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/radulescupetru/subscriptions",
"organizations_url": "https://api.github.com/users/radulescupetru/orgs",
"repos_url": "https://api.github.com/users/radulescupetru/repos",
"events_url": "https://api.github.com/users/radulescupetru/events{/privacy}",
"received_events_url": "https://api.github.com/users/radulescupetru/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Seeing the following numbers on the script shared in the original issue. (MacBook Pro M4)\r\n\r\n```\r\n1000it [00:00, 4074.63it/s] # ds_a.with_format(\"torch\")\r\n1000it [00:01, 593.39it/s] # ds_a.shuffle()\r\n1999it [00:03, 594.09it/s] # datasets.interleave_datasets([ds_a, ds_b])\r\n1000it [00:00, 5382.45it/s] # ds_a.shuffle().with_format(\"torch\") <--- Was slow <2it/s\r\n1999it [00:00, 4743.45it/s] # datasets.interleave_datasets([ds_a, ds_b]).with_format(\"torch\") <--- Was slow <2it/s\r\n1999it [00:20, 98.94it/s] # torch.tensor(example[\"tensor\"])\r\n```\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7771). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq I've implemented the iteration on arrow as separate methods, can you take another look/trigger ci? ",
"@lhoestq Any idea why the integration tests are failing, is this expected? Anything I can do on my side?",
"They seem unrelated to your changes. Merging :)"
] | 2025-09-14T06:40:50 | 2025-09-17T16:51:28 | 2025-09-17T16:51:28 |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7771",
"html_url": "https://github.com/huggingface/datasets/pull/7771",
"diff_url": "https://github.com/huggingface/datasets/pull/7771.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7771.patch",
"merged_at": "2025-09-17T16:51:27"
}
|
Fixes a case where concatenating or interleaving datasets was slow after a `with_format(...)` call.
Details here: https://github.com/huggingface/datasets/issues/6637
@lhoestq I tried to minimize the duplication between the `iter` and `iter_arrow` methods; I'm not sure whether this goes against the design, so I can separate them if needed.
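For intuition only (not the library's actual implementation, which additionally handles Arrow-formatted batches, sampling probabilities, and stopping strategies), interleaving under the default `first_exhausted` semantics can be sketched as a plain round-robin over iterators:

```python
def interleave_first_exhausted(*iterables):
    """Round-robin over the inputs, stopping once any iterator runs out.

    Simplified sketch of the semantics only.
    """
    iterators = [iter(it) for it in iterables]
    while True:
        for it in iterators:
            try:
                yield next(it)
            except StopIteration:
                return

print(list(interleave_first_exhausted([1, 3, 5], [2, 4])))  # -> [1, 2, 3, 4, 5]
```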
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7771/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7770
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7770/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7770/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7770/events
|
https://github.com/huggingface/datasets/pull/7770
| 3,413,892,226 |
PR_kwDODunzps6obQdR
| 7,770 |
Fix: Correct float feature generation in `generate_examples`
|
{
"login": "Sanjaykumar030",
"id": 183703408,
"node_id": "U_kgDOCvMXcA",
"avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sanjaykumar030",
"html_url": "https://github.com/Sanjaykumar030",
"followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
"following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
"gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
"organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
"repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
"events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"Hey @lhoestq, just following up on this — it fixes float feature generation in `generate_examples`. Thanks!"
] | 2025-09-13T17:37:09 | 2025-09-17T18:37:25 | null |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7770",
"html_url": "https://github.com/huggingface/datasets/pull/7770",
"diff_url": "https://github.com/huggingface/datasets/pull/7770.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7770.patch",
"merged_at": null
}
|
This PR fixes a bug in the `generate_examples` function where `datasets.Value` features with a `float` dtype were incorrectly generated using `np.random.randint`. This resulted in integer values being cast to float, which is not representative of true floating-point data.
**Key changes include:**
* Added explicit handling for `float` features using `np.random.rand` to generate continuous values.
* Introduced fail-fast type checks for unsupported dtypes to improve robustness.
* Added validation for sequence features to ensure `seq_shapes` is provided.
### Before Fix
Float features were generated incorrectly as integers cast to float:
```text
- Example 0:
- int_feature: 0
- float_feature: 9.0 <-- Incorrect: An integer disguised as a float
- string_feature: The small grey turtle was surprisingly fast...
- seq_feature: [0.3048 0.4291 0.4283]
```
### After Fix
Float features are now correctly generated as continuous numbers in the range [0, 1):
```text
+ Example 0:
+ int_feature: 0
+ float_feature: 0.0183 <-- Correct: A true random float
+ string_feature: The small grey turtle was surprisingly fast...
+ seq_feature: [0.9237 0.7972 0.8526]
```
#### Note: This PR is a follow-up/fix of the previously closed PR #7769 for clarity and context.
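The dtype dispatch described above can be sketched roughly as follows (a simplified stand-in for the actual `generate_examples` change; the function name and dtype strings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def generate_scalar(dtype: str):
    # Simplified sketch of the fix: float dtypes get continuous values from
    # rng.random() instead of integers cast to float; unsupported dtypes
    # fail fast with a TypeError.
    if dtype.startswith("int"):
        return int(rng.integers(0, 10))
    if dtype.startswith("float"):
        return float(rng.random())  # continuous value in [0, 1)
    raise TypeError(f"Unsupported dtype for example generation: {dtype}")
```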
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7770/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7769
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7769/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7769/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7769/events
|
https://github.com/huggingface/datasets/pull/7769
| 3,413,868,583 |
PR_kwDODunzps6obLVK
| 7,769 |
Fix: Correct float feature generation in `generate_examples`
|
{
"login": "Sanjaykumar030",
"id": 183703408,
"node_id": "U_kgDOCvMXcA",
"avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sanjaykumar030",
"html_url": "https://github.com/Sanjaykumar030",
"followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
"following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
"gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
"organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
"repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
"events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[] | 2025-09-13T17:19:36 | 2025-09-13T17:30:15 | 2025-09-13T17:30:15 |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7769",
"html_url": "https://github.com/huggingface/datasets/pull/7769",
"diff_url": "https://github.com/huggingface/datasets/pull/7769.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7769.patch",
"merged_at": null
}
|
This PR fixes a bug in the `generate_examples` function where `datasets.Value` features with a `float` dtype were incorrectly generated using `np.random.randint`. This resulted in integer values being cast to float, which is not representative of true floating-point data.
**Key changes include:**
1. Added explicit handling for float features using `np.random.rand` to generate continuous values.
2. Introduced fail-fast type checks for unsupported dtypes to improve robustness.
3. Added validation for sequence features to ensure `seq_shapes` is provided.
### Before Fix
Float features were generated incorrectly as integers cast to float:
```text
Example 0:
int_feature: 0
float_feature: 9.0 <-- Incorrect: An integer disguised as a float
string_feature: The small grey turtle was surprisingly fast...
seq_feature: [0.3048 0.4291 0.4283]
```
### After Fix
Float features are now correctly generated as continuous numbers in the range [0, 1):
```text
Example 0:
int_feature: 0
float_feature: 0.0183 <-- Correct: A true random float
string_feature: The small grey turtle was surprisingly fast...
seq_feature: [0.9237 0.7972 0.8526]
|
{
"login": "Sanjaykumar030",
"id": 183703408,
"node_id": "U_kgDOCvMXcA",
"avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sanjaykumar030",
"html_url": "https://github.com/Sanjaykumar030",
"followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
"following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
"gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
"organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
"repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
"events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7769/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7768
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7768/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7768/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7768/events
|
https://github.com/huggingface/datasets/pull/7768
| 3,413,755,917 |
PR_kwDODunzps6oa1A7
| 7,768 |
Custom `dl_manager` in `load_dataset`
|
{
"login": "ain-soph",
"id": 13214530,
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ain-soph",
"html_url": "https://github.com/ain-soph",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-09-13T16:09:45 | 2025-09-13T16:09:45 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7768",
"html_url": "https://github.com/huggingface/datasets/pull/7768",
"diff_url": "https://github.com/huggingface/datasets/pull/7768.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7768.patch",
"merged_at": null
}
|
Fix #7767
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7768/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7767
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7767/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7767/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7767/events
|
https://github.com/huggingface/datasets/issues/7767
| 3,411,654,444 |
I_kwDODunzps7LWbcs
| 7,767 |
Custom `dl_manager` in `load_dataset`
|
{
"login": "ain-soph",
"id": 13214530,
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ain-soph",
"html_url": "https://github.com/ain-soph",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[] | 2025-09-12T19:06:23 | 2025-09-12T19:07:52 | null |
NONE
| null | null | null | null |
### Feature request
https://github.com/huggingface/datasets/blob/4.0.0/src/datasets/load.py#L1411-L1418
```
def load_dataset(
...
dl_manager: Optional[DownloadManager] = None, # add this new argument
**config_kwargs,
) -> Union[DatasetDict, Dataset, IterableDatasetDict, IterableDataset]:
...
# Create a dataset builder
builder_instance = load_dataset_builder(
path=path,
name=name,
data_dir=data_dir,
data_files=data_files,
cache_dir=cache_dir,
features=features,
download_config=download_config,
download_mode=download_mode,
revision=revision,
token=token,
storage_options=storage_options,
**config_kwargs,
)
# Return iterable dataset in case of streaming
if streaming:
return builder_instance.as_streaming_dataset(split=split)
# Note: This is the revised part
if dl_manager is None:
if download_config is None:
download_config = DownloadConfig(
cache_dir=builder_instance._cache_downloaded_dir,
force_download=download_mode == DownloadMode.FORCE_REDOWNLOAD,
force_extract=download_mode == DownloadMode.FORCE_REDOWNLOAD,
use_etag=False,
num_proc=num_proc,
token=builder_instance.token,
storage_options=builder_instance.storage_options,
) # We don't use etag for data files to speed up the process
dl_manager = DownloadManager(
dataset_name=builder_instance.dataset_name,
download_config=download_config,
data_dir=builder_instance.config.data_dir,
record_checksums=(
builder_instance._record_infos or verification_mode == VerificationMode.ALL_CHECKS
),
)
# Download and prepare data
builder_instance.download_and_prepare(
download_config=download_config,
download_mode=download_mode,
verification_mode=verification_mode,
dl_manager=dl_manager, # pass the new argument
num_proc=num_proc,
storage_options=storage_options,
)
...
```
### Motivation
In my case, I want to handle the downloading of cache files manually (e.g., saving them to another location without hash-based filenames, or reusing existing local files).
### Your contribution
It's already implemented above. If the maintainers think this is worth considering, I'll open a PR.
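As a shape sketch of the proposed API (stand-in classes, not the real `datasets` objects), the caller-supplied manager simply takes precedence over the default one:

```python
from typing import Optional

class DownloadManager:
    """Minimal stand-in for datasets.DownloadManager (illustration only)."""
    def __init__(self, dataset_name: str = "", **kwargs):
        self.dataset_name = dataset_name

def load_dataset(path: str, dl_manager: Optional[DownloadManager] = None, **config_kwargs):
    # Sketch of the proposed default: build a manager only when none is given,
    # so callers can inject one that controls cache filenames and locations.
    if dl_manager is None:
        dl_manager = DownloadManager(dataset_name=path)
    return dl_manager  # the real function would pass this to download_and_prepare
```

Usage would then look like `load_dataset("my_dataset", dl_manager=my_custom_manager)`.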
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7767/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7766
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7766/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7766/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7766/events
|
https://github.com/huggingface/datasets/issues/7766
| 3,411,611,165 |
I_kwDODunzps7LWQ4d
| 7,766 |
cast columns to Image/Audio/Video with `storage_options`
|
{
"login": "ain-soph",
"id": 13214530,
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ain-soph",
"html_url": "https://github.com/ain-soph",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[] | 2025-09-12T18:51:01 | 2025-09-12T18:51:01 | null |
NONE
| null | null | null | null |
### Feature request
Allow `storage_options` to be passed to
1. `cast`-related operations (e.g., `cast_column`, `cast`)
2. `info`-related construction (e.g., `from_dict`, `from_pandas`, `from_polars`) together with `info.features`
```python3
import datasets
image_path = "s3://bucket/sample.png"
dataset = datasets.Dataset.from_dict({"image_path": [image_path]})
# dataset = dataset.cast_column("image_path", datasets.Image()) # now works without `storage_options`
# expected behavior
dataset = dataset.cast_column("image_path", datasets.Image(), storage_options={"anon": True})
```
### Motivation
I'm using my own registered fsspec filesystem (S3 with customized local-cache support). I need to pass cache folder paths (`cache_dirs: list[str]`) to the filesystem when reading the remote images cast from file paths.
### Your contribution
I could help with a PR on weekends.
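The custom-filesystem setup described in the motivation might look like the following (a hedged sketch: the class name, protocol string, and `cache_dirs` parameter are hypothetical; only the fsspec registration mechanism itself is real):

```python
import fsspec

class CachedS3FileSystem(fsspec.AbstractFileSystem):
    """Hypothetical S3-backed filesystem with local cache directories."""
    protocol = "cached-s3"

    def __init__(self, cache_dirs=None, **kwargs):
        super().__init__(**kwargs)
        self.cache_dirs = cache_dirs or []

fsspec.register_implementation("cached-s3", CachedS3FileSystem, clobber=True)

# storage_options passed through cast_column would end up here:
fs = fsspec.filesystem("cached-s3", cache_dirs=["/tmp/hf_image_cache"])
```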
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7766/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7765
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7765/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7765/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7765/events
|
https://github.com/huggingface/datasets/issues/7765
| 3,411,556,378 |
I_kwDODunzps7LWDga
| 7,765 |
polars dataset cannot cast column to Image/Audio/Video
|
{
"login": "ain-soph",
"id": 13214530,
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ain-soph",
"html_url": "https://github.com/ain-soph",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"I fixed this with a combination of `to_dict` and `from_dict`:\n\n```py\ndatasets.Dataset.from_dict(df.to_dict(as_series=False))\n```",
"@samuelstevens Yeah, I'm using similar workaround as well. But it would be ideal if we can avoid the copy."
] | 2025-09-12T18:32:49 | 2025-09-16T01:33:31 | null |
NONE
| null | null | null | null |
### Describe the bug
A dataset created with `from_polars` cannot cast a column to Image/Audio/Video, while the same cast works with `from_pandas` and `from_dict`.
### Steps to reproduce the bug
```python3
import datasets
import pandas as pd
import polars as pl
image_path = "./sample.png"
# polars
df = pl.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_polars(df)
dataset = dataset.cast_column("image_path", datasets.Image())
# # raises:
# pyarrow.lib.ArrowNotImplementedError: Unsupported cast from large_string to struct using function cast_struct
# pandas
df = pd.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_pandas(df)
dataset = dataset.cast_column("image_path", datasets.Image())
# # passes:
# {'image_path': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=338x277 at 0x7FBA719D4050>}
# dict
dataset = datasets.Dataset.from_dict({"image_path": [image_path]})
dataset = dataset.cast_column("image_path", datasets.Image())
# # passes:
# {'image_path': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=338x277 at 0x7FBA719D4050>}
```
### Expected behavior
The `from_polars` case shouldn't raise an error and should produce the same output as `from_pandas` and `from_dict`.
### Environment info
```
# Name Version Build Channel
datasets 4.0.0 pypi_0 pypi
pandas 2.3.1 pypi_0 pypi
polars 1.32.3 pypi_0 pypi
```
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7765/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7765/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7764
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7764/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7764/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7764/events
|
https://github.com/huggingface/datasets/pull/7764
| 3,410,722,819 |
PR_kwDODunzps6oQltc
| 7,764 |
update torchcodec in ci
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7764). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-09-12T14:26:42 | 2025-09-12T15:56:16 | 2025-09-12T15:56:14 |
MEMBER
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7764",
"html_url": "https://github.com/huggingface/datasets/pull/7764",
"diff_url": "https://github.com/huggingface/datasets/pull/7764.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7764.patch",
"merged_at": "2025-09-12T15:56:14"
}
|
before the release, to make sure everything works fine
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7764/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7763
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7763/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7763/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7763/events
|
https://github.com/huggingface/datasets/pull/7763
| 3,407,833,429 |
PR_kwDODunzps6oGx51
| 7,763 |
Bump dill to 0.4.0
|
{
"login": "Bomme",
"id": 13520622,
"node_id": "MDQ6VXNlcjEzNTIwNjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/13520622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bomme",
"html_url": "https://github.com/Bomme",
"followers_url": "https://api.github.com/users/Bomme/followers",
"following_url": "https://api.github.com/users/Bomme/following{/other_user}",
"gists_url": "https://api.github.com/users/Bomme/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bomme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bomme/subscriptions",
"organizations_url": "https://api.github.com/users/Bomme/orgs",
"repos_url": "https://api.github.com/users/Bomme/repos",
"events_url": "https://api.github.com/users/Bomme/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bomme/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"Have you tried to run `pytest tests/test_fingerprint.py` ? It seems dill 0.3.9 breaks a lot of tests\r\n\r\n```\r\nFAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_regex - NameError: name 'log' is not defined\r\nFAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer - NameError: name 'log' is not defined\r\nFAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer_with_cache - NameError: name 'log' is not defined\r\nFAILED tests/test_fingerprint.py::RecurseHashTest::test_hash_ignores_line_definition_of_function - AssertionError: 'c48ebfacf8768f50' != '27e49d047c02c83b'\r\nFAILED tests/test_fingerprint.py::RecurseHashTest::test_hash_ipython_function - AssertionError: '65edc6b6d425a8e9' != '9f364fe298fb286a'\r\nFAILED tests/test_fingerprint.py::HashingTest::test_hash_tiktoken_encoding - NameError: name 'log' is not defined\r\nFAILED tests/test_fingerprint.py::HashingTest::test_hash_torch_compiled_module - NameError: name 'log' is not defined\r\nFAILED tests/test_fingerprint.py::HashingTest::test_hash_torch_generator - NameError: name 'log' is not defined\r\nFAILED tests/test_fingerprint.py::HashingTest::test_hash_torch_tensor - NameError: name 'log' is not defined\r\nFAILED tests/test_fingerprint.py::HashingTest::test_set_doesnt_depend_on_order - NameError: name 'log' is not defined\r\nFAILED tests/test_fingerprint.py::HashingTest::test_set_stable - NameError: name 'log' is not defined\r\nFAILED tests/test_fingerprint.py::test_move_script_doesnt_change_hash - AssertionError: assert b'93072ca404a697db\\n' == b'cf89a7e497a97e32\\n'\r\n```",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7763). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi @lhoestq! Yes, I did. It's not really `dill` that breaks things. Rather the shims that `datasets` has in place did not include the next version. \n\nFYI: I also tested it with `dill-0.4.0` and the changes would need to be analogous, but I wanted to be conservative in this PR. ",
"The NameError is fixed in your PR since it defines the right `log()` function for 0.3.9.\r\n\r\nBut I'm less sure about the AssertionError that may be related to deterministic hashing or ipython/shell function hashing. We would need to solve these\r\n\r\nEDIT: ah actually it does ! cool ! let me update the branch and re-run the CI"
] | 2025-09-11T19:43:16 | 2025-09-15T08:37:48 | 2025-09-15T08:37:48 |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7763",
"html_url": "https://github.com/huggingface/datasets/pull/7763",
"diff_url": "https://github.com/huggingface/datasets/pull/7763.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7763.patch",
"merged_at": "2025-09-15T08:37:48"
}
|
This bumps `dill` to 0.3.9 and closes #7510
It turns out the only thing required to make the tests pass was to extend the version checks to include 0.3.9.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7763/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7762
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7762/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7762/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7762/events
|
https://github.com/huggingface/datasets/pull/7762
| 3,406,885,775 |
PR_kwDODunzps6oDiF2
| 7,762 |
Parquet: use data page v2 for efficient page pruning
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7762). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Closing this since it looks like the page offset index is enough :)"
] | 2025-09-11T14:42:22 | 2025-09-11T15:24:25 | 2025-09-11T15:24:24 |
MEMBER
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7762",
"html_url": "https://github.com/huggingface/datasets/pull/7762",
"diff_url": "https://github.com/huggingface/datasets/pull/7762.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7762.patch",
"merged_at": null
}
|
This is needed to enable page pruning with DataFusion, which will be useful for the Dataset Viewer.
Indeed, page pruning with DataFusion makes it possible to download only certain pages of a row group, reducing the I/O required to read just a few rows.
But while data page v1 generally works, it's not easy with DataFusion to do page pruning on datasets with nested data. This is because rows can span multiple pages in v1, contrary to v2.
cc @severo for viz
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7762/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7761
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7761/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7761/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7761/events
|
https://github.com/huggingface/datasets/pull/7761
| 3,402,787,999 |
PR_kwDODunzps6n1bls
| 7,761 |
Audio: use TorchCodec instead of Soundfile for encoding
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7761). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-09-10T14:47:07 | 2025-09-10T15:09:36 | 2025-09-10T15:09:35 |
MEMBER
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7761",
"html_url": "https://github.com/huggingface/datasets/pull/7761",
"diff_url": "https://github.com/huggingface/datasets/pull/7761.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7761.patch",
"merged_at": "2025-09-10T15:09:35"
}
|
This removes the dependency on Soundfile completely.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7761/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7760
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7760/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7760/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7760/events
|
https://github.com/huggingface/datasets/issues/7760
| 3,401,799,485 |
I_kwDODunzps7Kw1c9
| 7,760 |
Hugging Face Hub Dataset Upload CAS Error
|
{
"login": "n-bkoe",
"id": 142820182,
"node_id": "U_kgDOCINDVg",
"avatar_url": "https://avatars.githubusercontent.com/u/142820182?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n-bkoe",
"html_url": "https://github.com/n-bkoe",
"followers_url": "https://api.github.com/users/n-bkoe/followers",
"following_url": "https://api.github.com/users/n-bkoe/following{/other_user}",
"gists_url": "https://api.github.com/users/n-bkoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n-bkoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n-bkoe/subscriptions",
"organizations_url": "https://api.github.com/users/n-bkoe/orgs",
"repos_url": "https://api.github.com/users/n-bkoe/repos",
"events_url": "https://api.github.com/users/n-bkoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/n-bkoe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"cc @jsulz maybe ?",
"Curious! I took a look at this and was unable to see why this would be occurring on our side. Tagging in @jgodlew and @bpronan since they might have insights. \n\n@n-bkoe just a few questions if you wouldn't mind: \n1. What kind of data are you uploading and what is the difference in file size (in bytes) between 100 and 10,000 samples?\n2. Could you provide a specific repository where you encountered this so we could look at to attempt to trace this in our systems?\n3. I cannot currently reproduce this, but I'm just trying locally; have you tried to attempt this outside of SageMaker? I'm wondering if there is something unique about that environment causing this. \n4. How/where did you set `HF_HUB_DISABLE_XET`?",
"Hi, and thank you for your quick answer 🙏 \n\n1. Its fairly simple string data, four cols, all string, some long. The script works for data up to 8000 samples long, which is two parquet files totalling 260 kb. It breaks at 10k. \n2. Unfortunately, both data and code is private for now !\n3. I will try \n4. I did it both at CLI level when call my script, and tried inside the python script with os.environ[\"HF_HUB_DISABLE_XET\"] = \"1\"\n\nThe load is also partial, it starts for one file, but does not complete and no data file is pushed. \n\n```\n5. Pushing to Hugging Face Hub...\nPushing dataset to YourOrg/dataset-10000-test_set...\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 1235.07ba/s]\nProcessing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-11T15:14:37.018887Z ERROR Fatal Error: \"cas::upload_xorb\" api call failed (request id 01K4WNFGSQV1FH8846S0DNS91C): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)\n at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113\n\nProcessing Files (0 / 0) : 0%| | 0.00B / 291kB, 0.00B/s \nNew Data Upload : 0%| | 0.00B / 291kB, 0.00B/s \n❌ Failed to push test_set: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\nUploading the dataset shards: 0%| | 0/1 [00:00<?, ? 
shards/s]\nPushing dataset to YourOrg/dataset-10000-indic_test_set...\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:00<00:00, 1289.10ba/s]\nProcessing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-11T15:14:37.721996Z ERROR Fatal Error: \"cas::upload_xorb\" api call failed (request id 01K4WNFHFPJ2DC5D6JC93172H9): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)\n at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113\n\nProcessing Files (0 / 0) : 0%| | 0.00B / 277kB, 0.00B/s \nNew Data Upload : 0%| | 0.00B / 277kB, 0.00B/s \n❌ Failed to push indic_test_set: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\nUploading the dataset shards: 0%| | 0/1 [00:00<?, ? 
shards/s]\nPushing dataset to YourOrg/dataset-10000-indic_test_set_combined...\nCreating parquet from Arrow format: 100%|███████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:00<00:00, 1310.04ba/s]\nProcessing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-11T15:14:38.685575Z ERROR Fatal Error: \"cas::upload_xorb\" api call failed (request id 01K4WNFJDTVAYM9MFTRDSWKTD6): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)\n at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113\n\nProcessing Files (0 / 0) : 0%| | 0.00B / 184kB, 0.00B/s \nNew Data Upload : 0%| | 0.00B / 184kB, 0.00B/s \n❌ Failed to push indic_test_set_combined: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\nUploading the dataset shards: 0%| | 0/1 [00:00<?, ? shards/s]\n\nSummary:\n Succeeded: None\n Failed: [('test_set', 'Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'), ('indic_test_set', 'Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'), ('indic_test_set_combined', 'Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')]\n❌ Some datasets failed to upload\n```\n\n",
"Thanks for following up with more details, @n-bkoe \n\nCould you tell me more about your Sagemaker environment and how you are running this script? In testing with your steps to reproduce in a Sagemaker Jupyter notebook instance (and uploading Parquet datasets with splits of anywhere from a few KBs to a few hundred MBs), I've yet to reproduce this error. This makes me believe that it's either something about the Sagemaker environment or the reproduction steps that I'm not yet emulating. \n\nConcerning the `HF_HUB_DISABLE_XET` flag, you should ensure it is set before any package imports and in the same process where you are running the script itself. If either aren't true, then this environment variable will not work. You could also explicitly uninstall `hf-xet` from the environment, although that should be unnecessary with the `HF_HUB_DISABLE_XET` flag."
] | 2025-09-10T10:01:19 | 2025-09-16T20:01:36 | null |
NONE
| null | null | null | null |
### Describe the bug
Experiencing persistent 401 Unauthorized errors when attempting to upload datasets to the Hugging Face Hub using the `datasets` library. The error occurs specifically with the CAS (Content Addressable Storage) service during the upload process. Setting `HF_HUB_DISABLE_XET=1` was tried; uploads seem to work for smaller files.
Exact error message:
```
Processing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-10T09:44:35.657565Z ERROR Fatal Error: "cas::upload_xorb" api call failed (request id 01b[...]XXX): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/7f3abdc[...]XXX)
at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113
Processing Files (0 / 0) : 0%| | 0.00B / 184kB, 0.00B/s
New Data Upload : 0%| | 0.00B / 184kB, 0.00B/s
❌ Failed to push some_dataset: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/7f3abdc[...]XXX
```
Workaround Attempts
1. **Disabled XET**: Set `HF_HUB_DISABLE_XET=1` environment variable
2. **Updated hf-xet**: Use `hf-xet==1.1.9` rather than latest
3. **Verified Authentication**: Confirmed HF token is valid and has write permissions
4. **Tested with Smaller Datasets**:
- 100 samples: ✅ **SUCCESS** (uploaded successfully)
- 10,000 samples: ❌ **FAILS** (401 Unauthorized)
### Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
# Create dataset (example with 10,000 samples)
dataset = Dataset.from_dict({
"question": questions,
"answer": answers,
# ... other fields
})
# Split into train/test
dataset_dict = dataset.train_test_split(test_size=0.1)
# Upload to Hub
dataset_dict.push_to_hub("Org/some-dataset")
```
### Expected behavior
## Expected Behavior
- Dataset should upload successfully to Hugging Face Hub
- Progress bars should complete without authentication errors
- Dataset should be accessible at the specified repository URL
## Actual Behavior
- Upload fails consistently with 401 Unauthorized error
- Error occurs specifically during CAS service interaction
- No progress is made on the upload (0% completion)
- Dataset is created on Hugging Face Hub with no data folder
### Environment info
- **Platform**: SageMaker (AWS)
- **Python Version**: 3.12
- **Libraries**:
- `datasets` library (latest version)
- `hf-xet==1.1.9` (attempted fix)
- **Authentication**: Hugging Face token configured
- **Dataset Size**: ~10,000 samples, works for smaller sizes (e.g. 100)
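Until the root cause is confirmed, a client-side mitigation is to retry the push with exponential backoff. A minimal stdlib sketch — `push_fn` is a placeholder for a zero-argument callable wrapping the actual `push_to_hub(...)` call, not a `datasets` API:

```python
import time

def push_with_retry(push_fn, retries=3, base_delay=2.0):
    """Call a flaky upload function, retrying with exponential backoff.

    `push_fn` is a zero-argument callable (e.g. a lambda wrapping
    `dataset_dict.push_to_hub(...)`). Sketch only; in practice you would
    catch the specific Hub/CAS exception rather than bare Exception.
    """
    for attempt in range(retries):
        try:
            return push_fn()
        except Exception:
            if attempt == retries - 1:
                raise
            # Back off: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * 2 ** attempt)
```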
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7760/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7759
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7759/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7759/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7759/events
|
https://github.com/huggingface/datasets/issues/7759
| 3,398,099,513 |
I_kwDODunzps7KiuI5
| 7,759 |
Comment/feature request: Huggingface 502s from GHA
|
{
"login": "Scott-Simmons",
"id": 52365471,
"node_id": "MDQ6VXNlcjUyMzY1NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/52365471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Scott-Simmons",
"html_url": "https://github.com/Scott-Simmons",
"followers_url": "https://api.github.com/users/Scott-Simmons/followers",
"following_url": "https://api.github.com/users/Scott-Simmons/following{/other_user}",
"gists_url": "https://api.github.com/users/Scott-Simmons/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Scott-Simmons/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Scott-Simmons/subscriptions",
"organizations_url": "https://api.github.com/users/Scott-Simmons/orgs",
"repos_url": "https://api.github.com/users/Scott-Simmons/repos",
"events_url": "https://api.github.com/users/Scott-Simmons/events{/privacy}",
"received_events_url": "https://api.github.com/users/Scott-Simmons/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-09-09T11:59:20 | 2025-09-09T13:02:28 | null |
NONE
| null | null | null | null |
This is no longer a pressing issue, but for completeness I am reporting that on August 26th, GET requests to `https://datasets-server.huggingface.co/info\?dataset\=livebench/math` were returning 502s when invoked from [github actions](https://github.com/UKGovernmentBEIS/inspect_evals/actions/runs/17241892475/job/48921123754) (that link will expire eventually, [here are the logs](https://github.com/user-attachments/files/22233578/logs_44225296943.zip)).
When invoked from Actions, the requests appeared to fail consistently for ~6 hours. However, these 502s never occurred when the same request was invoked from my local machine during that time period.
I suspect that this is related to how the requests are routed from GitHub Actions versus locally.
It's not clear to me whether the request even reached Hugging Face's servers or whether GitHub's proxy stopped it from going through, but I wanted to report it nonetheless in case this is helpful information. I'm curious whether Hugging Face can do anything on their end to confirm the cause.
And a feature request in case this happens in the future (assuming Hugging Face has visibility into it): a "datasets status" page highlighting when 502s occur for specific individual datasets could be useful for people debugging on the other end of this!
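Until such a status page exists, one client-side mitigation for transient 502s is to retry with exponential backoff. A minimal sketch of that pattern (function names are hypothetical, not part of any Hugging Face client; a stub stands in for the real HTTP GET):

```python
import time

def get_with_retries(fetch, max_attempts=5, base_delay=1.0):
    """Call fetch() and retry on transient server errors (e.g. 502s),
    doubling the delay between attempts."""
    for attempt in range(max_attempts):
        status, body = fetch()
        if status < 500:  # success, or a non-retryable client error
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    return status, body

# Stub standing in for a real GET to datasets-server: 502s twice, then succeeds.
calls = {"n": 0}
def fake_fetch():
    calls["n"] += 1
    return (502, "Bad Gateway") if calls["n"] < 3 else (200, "ok")

print(get_with_retries(fake_fetch, base_delay=0))  # (200, 'ok') after 3 calls
```

This would not have helped with a 6-hour outage, but it smooths over the short-lived 502s that CI runners tend to hit.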
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7759/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7758
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7758/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7758/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7758/events
|
https://github.com/huggingface/datasets/issues/7758
| 3,395,590,783 |
I_kwDODunzps7KZJp_
| 7,758 |
Option for Anonymous Dataset link
|
{
"login": "egrace479",
"id": 38985481,
"node_id": "MDQ6VXNlcjM4OTg1NDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/egrace479",
"html_url": "https://github.com/egrace479",
"followers_url": "https://api.github.com/users/egrace479/followers",
"following_url": "https://api.github.com/users/egrace479/following{/other_user}",
"gists_url": "https://api.github.com/users/egrace479/gists{/gist_id}",
"starred_url": "https://api.github.com/users/egrace479/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/egrace479/subscriptions",
"organizations_url": "https://api.github.com/users/egrace479/orgs",
"repos_url": "https://api.github.com/users/egrace479/repos",
"events_url": "https://api.github.com/users/egrace479/events{/privacy}",
"received_events_url": "https://api.github.com/users/egrace479/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[] | 2025-09-08T20:20:10 | 2025-09-08T20:20:10 | null |
NONE
| null | null | null | null |
### Feature request
Allow for anonymized viewing of datasets. For instance, something similar to [Anonymous GitHub](https://anonymous.4open.science/).
### Motivation
We generally publish our data through Hugging Face. This has worked out very well as it's both our repository and archive (thanks to the DOI feature!). However, we have an increasing challenge when it comes to sharing our datasets for paper (both conference and journal) submissions. Due to the need to share data anonymously, we can't use the Hugging Face URLs, but datasets tend to be too large for inclusion as a zip. Being able to have an anonymous link would be great since we can't be double-publishing the data.
### Your contribution
Sorry, I don't have a contribution to make to the implementation of this. Perhaps it would be possible to work off the [Anonymous GitHub](https://github.com/tdurieux/anonymous_github) code to generate something analogous with pointers to the data still on Hugging Face's servers (instead of the duplication of data required for the GitHub version)?
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7758/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7757
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7757/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7757/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7757/events
|
https://github.com/huggingface/datasets/issues/7757
| 3,389,535,011 |
I_kwDODunzps7KCDMj
| 7,757 |
Add support for `.conll` file format in datasets
|
{
"login": "namesarnav",
"id": 88763593,
"node_id": "MDQ6VXNlcjg4NzYzNTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/88763593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/namesarnav",
"html_url": "https://github.com/namesarnav",
"followers_url": "https://api.github.com/users/namesarnav/followers",
"following_url": "https://api.github.com/users/namesarnav/following{/other_user}",
"gists_url": "https://api.github.com/users/namesarnav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/namesarnav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/namesarnav/subscriptions",
"organizations_url": "https://api.github.com/users/namesarnav/orgs",
"repos_url": "https://api.github.com/users/namesarnav/repos",
"events_url": "https://api.github.com/users/namesarnav/events{/privacy}",
"received_events_url": "https://api.github.com/users/namesarnav/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] | null |
[
"That would be cool ! feel free to ping me if I can help reviewing a PR"
] | 2025-09-06T07:25:39 | 2025-09-10T14:22:48 | null |
NONE
| null | null | null | null |
### Feature request
I’d like to request native support in the Hugging Face datasets library for reading .conll files (CoNLL format). This format is widely used in NLP tasks, especially for Named Entity Recognition (NER), POS tagging, and other token classification problems.
Right now `.conll` datasets need to be manually parsed or preprocessed before being loaded into `datasets`. Having built-in support would save time and make workflows smoother for researchers and practitioners.
I propose adding a CoNLL dataset builder or file parser to `datasets` that can:
- Read `.conll` files with customizable delimiters (space, tab).
- Handle sentence/document boundaries (typically indicated by empty lines).
- Support common CoNLL variants (e.g., CoNLL-2000 chunking, CoNLL-2003 NER).
- Output a dataset where each example contains:
- tokens: list of strings
- tags (or similar): list of labels aligned with tokens
Given a .conll snippet like:
```
EU NNP B-ORG
rejects VBZ O
German JJ B-MISC
call NN O
. . O
```
The dataset should load as:
```
{
"tokens": ["EU", "rejects", "German", "call", "."],
"tags": ["B-ORG", "O", "B-MISC", "O", "O"]
}
```
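For illustration, a minimal stand-alone sketch of the proposed parsing logic (first column as token, last column as tag, blank lines as sentence boundaries; not the actual builder implementation):

```python
def parse_conll(text, token_col=0, tag_col=-1):
    """Parse CoNLL-style text into {"tokens": [...], "tags": [...]} examples.
    Blank lines mark sentence boundaries; columns are whitespace-separated."""
    examples, tokens, tags = [], [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:  # empty line = sentence boundary
            if tokens:
                examples.append({"tokens": tokens, "tags": tags})
                tokens, tags = [], []
            continue
        tokens.append(parts[token_col])
        tags.append(parts[tag_col])
    if tokens:  # flush the final sentence
        examples.append({"tokens": tokens, "tags": tags})
    return examples

snippet = """EU NNP B-ORG
rejects VBZ O
German JJ B-MISC
call NN O
. . O"""
print(parse_conll(snippet))
```

The `token_col`/`tag_col` parameters sketch how CoNLL variants with different column layouts could be accommodated.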
### Motivation
- CoNLL files are a standard benchmark format in NLP (e.g., CoNLL-2003, CoNLL-2000).
- Many users train NER or sequence labeling models (like BERT for token classification) directly on `.conll`
- Right now you have to write your own parsing scripts. Built-in support would unify this process and would be much more convenient
### Your contribution
I’d be happy to contribute by implementing this feature. My plan is to:
- Add a new dataset script (conll.py) to handle .conll files.
- Implement parsing logic that supports sentence/document boundaries and token-label alignment.
- Write unit tests with small `.conll` examples to ensure correctness.
- Add documentation and usage examples so new users can easily load `.conll` datasets.
This would be my first open source contribution, so I’ll follow the `CONTRIBUTING.md` guidelines closely and adjust based on feedback from the maintainers.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7757/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7756
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7756/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7756/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7756/events
|
https://github.com/huggingface/datasets/issues/7756
| 3,387,076,693 |
I_kwDODunzps7J4rBV
| 7,756 |
datasets.map(f, num_proc=N) hangs with N>1 when run on import
|
{
"login": "arjunguha",
"id": 20065,
"node_id": "MDQ6VXNlcjIwMDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/20065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arjunguha",
"html_url": "https://github.com/arjunguha",
"followers_url": "https://api.github.com/users/arjunguha/followers",
"following_url": "https://api.github.com/users/arjunguha/following{/other_user}",
"gists_url": "https://api.github.com/users/arjunguha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arjunguha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arjunguha/subscriptions",
"organizations_url": "https://api.github.com/users/arjunguha/orgs",
"repos_url": "https://api.github.com/users/arjunguha/repos",
"events_url": "https://api.github.com/users/arjunguha/events{/privacy}",
"received_events_url": "https://api.github.com/users/arjunguha/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-09-05T10:32:01 | 2025-09-05T10:32:01 | null |
NONE
| null | null | null | null |
### Describe the bug
If you `import` a module that runs `datasets.map(f, num_proc=N)` at the top-level, Python hangs.
### Steps to reproduce the bug
1. Create a file that runs datasets.map at the top-level:
```bash
cat <<EOF > import_me.py
import datasets
the_dataset = datasets.load_dataset("openai/openai_humaneval")
the_dataset = the_dataset.map(lambda item: item, num_proc=2)
EOF
```
2. Start Python REPL:
```bash
uv run --python 3.12.3 --with "datasets==4.0.0" python3
Python 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
```
3. Import the file:
```python
import import_me
```
Observe hang.
### Expected behavior
Ideally it would not hang, or would fall back to `num_proc=1` with a warning.
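As a workaround in the meantime, moving the parallel step out of module import scope avoids starting worker processes while the import lock is held. A stdlib-only analogue of the pattern (the `datasets` calls are replaced by a `multiprocessing.Pool` to keep this self-contained and network-free):

```python
# import_me.py -- sketch of the workaround: defer the num_proc>1 work into a
# function so nothing spawns worker processes while the module is imported.
import multiprocessing as mp

def identity(x):
    return x

def build_dataset(data, num_proc=2):
    # In the real module this would be
    # datasets.load_dataset(...).map(..., num_proc=num_proc);
    # a Pool stands in for it here.
    with mp.Pool(num_proc) as pool:
        return pool.map(identity, data)

if __name__ == "__main__":  # never runs on `import import_me`
    print(build_dataset([1, 2, 3, 4]))
```

With this structure, `import import_me` is cheap and safe, and callers invoke `build_dataset()` explicitly when they actually want the parallel map.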
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7756/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7755
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7755/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7755/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7755/events
|
https://github.com/huggingface/datasets/pull/7755
| 3,386,079,181 |
PR_kwDODunzps6m-MTU
| 7,755 |
Support pathlib.Path for feature input
|
{
"login": "Joshua-Chin",
"id": 5422226,
"node_id": "MDQ6VXNlcjU0MjIyMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5422226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Joshua-Chin",
"html_url": "https://github.com/Joshua-Chin",
"followers_url": "https://api.github.com/users/Joshua-Chin/followers",
"following_url": "https://api.github.com/users/Joshua-Chin/following{/other_user}",
"gists_url": "https://api.github.com/users/Joshua-Chin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Joshua-Chin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Joshua-Chin/subscriptions",
"organizations_url": "https://api.github.com/users/Joshua-Chin/orgs",
"repos_url": "https://api.github.com/users/Joshua-Chin/repos",
"events_url": "https://api.github.com/users/Joshua-Chin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Joshua-Chin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7755). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-09-05T02:38:07 | 2025-09-10T15:19:35 | 2025-09-10T15:19:35 |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7755",
"html_url": "https://github.com/huggingface/datasets/pull/7755",
"diff_url": "https://github.com/huggingface/datasets/pull/7755.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7755.patch",
"merged_at": "2025-09-10T15:19:35"
}
|
This PR adds support for specifying image, video, audio, and pdf features using `pathlib.Path`.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7755/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7755/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7754
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7754/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7754/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7754/events
|
https://github.com/huggingface/datasets/pull/7754
| 3,384,883,008 |
PR_kwDODunzps6m6qRo
| 7,754 |
Add columns support to JSON loader
|
{
"login": "ArjunJagdale",
"id": 142811259,
"node_id": "U_kgDOCIMgew",
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunJagdale",
"html_url": "https://github.com/ArjunJagdale",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-09-04T18:21:26 | 2025-09-04T18:21:26 | null |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7754",
"html_url": "https://github.com/huggingface/datasets/pull/7754",
"diff_url": "https://github.com/huggingface/datasets/pull/7754.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7754.patch",
"merged_at": null
}
|
New fix for #7594.
This PR adds support for the `columns` argument in the JSON dataset builder:
- Added a `columns` parameter to `JsonConfig`.
- Applied column filtering after table creation, filling missing columns with `None`.
- Extended tests to cover:
  - Selecting a subset of columns
  - Handling missing requested columns
  - Column selection in the list-of-strings case
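The filtering step described above can be illustrated with a small pure-Python analogue (the real implementation operates on Arrow tables inside the JSON builder; the function name here is hypothetical):

```python
def filter_columns(rows, columns=None):
    """Keep only the requested columns, filling missing ones with None,
    mirroring the column-filtering behavior described for the JSON builder."""
    if columns is None:  # no filtering requested: pass rows through unchanged
        return rows
    return [{col: row.get(col) for col in columns} for row in rows]

rows = [{"a": 1, "b": 2}, {"a": 3}]
print(filter_columns(rows, columns=["a", "c"]))
# [{'a': 1, 'c': None}, {'a': 3, 'c': None}]
```

Filling absent columns with `None` (rather than raising) matches the "handling missing requested columns" test case listed above.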
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7754/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7753
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7753/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7753/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7753/events
|
https://github.com/huggingface/datasets/issues/7753
| 3,381,831,487 |
I_kwDODunzps7Jkqc_
| 7,753 |
datasets massively slows data reads, even in memory
|
{
"login": "lrast",
"id": 1191040,
"node_id": "MDQ6VXNlcjExOTEwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1191040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lrast",
"html_url": "https://github.com/lrast",
"followers_url": "https://api.github.com/users/lrast/followers",
"following_url": "https://api.github.com/users/lrast/following{/other_user}",
"gists_url": "https://api.github.com/users/lrast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lrast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lrast/subscriptions",
"organizations_url": "https://api.github.com/users/lrast/orgs",
"repos_url": "https://api.github.com/users/lrast/repos",
"events_url": "https://api.github.com/users/lrast/events{/privacy}",
"received_events_url": "https://api.github.com/users/lrast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"Hi ! you should try\n\n```python\nfrom datasets import Array3D, Dataset, Features, Value\n\nfeatures = Features({\"image\": Array3D(shape=(3, 224, 224), dtype=\"uint8\"), \"label\": Value(\"uint8\")})\nhf_dataset = Dataset.from_dict({'image': images, 'label':labels}, features=features)\n```\n\notherwise the type of the \"image\" column is List(List(List(Value(\"uint8\")))) and is less efficient.",
"Thanks! This leads to a 10x speedup:\n```python\nimport torch\nimport time\nfrom datasets import Array3D, Dataset, Features, Value\n\nimages = torch.randint(0, 255, (1000, 3, 224, 224), dtype=torch.uint8)\nlabels = torch.randint(0, 200, (1000,), dtype=torch.uint8)\n\npt_dataset = torch.utils.data.TensorDataset(images, labels)\n\nfeatures = Features({\"image\": Array3D(shape=(3, 224, 224), dtype=\"uint8\"), \"label\": Value(\"uint8\")})\nhf_dataset = Dataset.from_dict({'image': images, 'label':labels}, features=features)\nhf_in_memory = hf_dataset.map(lambda x: x, keep_in_memory=True)\n\nhf_dataset.set_format('torch', dtype=torch.uint8)\nhf_in_memory.set_format('torch', dtype=torch.uint8)\n\n# measure access speeds\ndef time_access(dataset, img_col):\n start_time = time.time()\n for i in range(1000):\n _ = dataset[i][img_col].shape\n end_time = time.time()\n return end_time - start_time\n\n\nprint(f\"In-memory Tensor access: {time_access(pt_dataset, 0):.4f} seconds\")\nprint(f\"HF Dataset access: {time_access(hf_dataset, 'image'):.4f} seconds\")\nprint(f\"In-memory HF Dataset access: {time_access(hf_in_memory, 'image'):.4f} seconds\")\n```\nProduces\n```\nIn-memory Tensor access: 0.0026 seconds\nHF Dataset access: 0.2070 seconds\nIn-memory HF Dataset access: 0.2112 seconds\n```\n\nCurious if there is a reason why this is not the default behavior for huggingface image processors?\n```python\nfrom transformers import ViTImageProcessor\nfrom transformers import AutoImageProcessor\n\nfrom datasets import load_dataset\n# Load the dataset\nds = load_dataset('ylecun/mnist', split='train[0:100]')\n\n# Instantiate the processor, explicitly requesting NumPy arrays\nprocessor1 = ViTImageProcessor.from_pretrained('facebook/vit-mae-base', do_convert_rgb=True)\nprocessor2 = AutoImageProcessor.from_pretrained(\"facebook/detr-resnet-50\", use_fast=True)\n\nprocessed1 = ds.map(lambda row: processor1(row['image']))\nprocessed2 = ds.map(lambda row: processor2(row['image']))\n\nprint( 
type(processed1['pixel_values'][0]), type(processed1['pixel_values'][0]))\n```\nproduces\n```\n<class 'list'> <class 'list'>\n```\n\nI can, of course, manually manipulate the dataset to the use the correct format, but this is fairly standard for images, and the performance implications seem large."
] | 2025-09-04T01:45:24 | 2025-09-18T22:08:51 | null |
NONE
| null | null | null | null |
### Describe the bug
Loading image data in a huggingface dataset results in very slow read speeds, approximately 1000 times longer than reading the same data from a pytorch dataset. This applies even when the dataset is loaded into RAM using a `keep_in_memory=True` flag.
The following script reproduces the result with random data, but it applies equally to datasets that are loaded from the hub.
### Steps to reproduce the bug
The following script should reproduce the behavior
```
import torch
import time
from datasets import Dataset
images = torch.randint(0, 255, (1000, 3, 224, 224), dtype=torch.uint8)
labels = torch.randint(0, 200, (1000,), dtype=torch.uint8)
pt_dataset = torch.utils.data.TensorDataset(images, labels)
hf_dataset = Dataset.from_dict({'image': images, 'label':labels})
hf_dataset.set_format('torch', dtype=torch.uint8)
hf_in_memory = hf_dataset.map(lambda x: x, keep_in_memory=True)
# measure access speeds
def time_access(dataset, img_col):
start_time = time.time()
for i in range(1000):
_ = dataset[i][img_col].shape
end_time = time.time()
return end_time - start_time
print(f"In-memory Tensor access: {time_access(pt_dataset, 0):.4f} seconds")
print(f"HF Dataset access: {time_access(hf_dataset, 'image'):.4f} seconds")
print(f"In-memory HF Dataset access: {time_access(hf_in_memory, 'image'):.4f} seconds")
```
### Expected behavior
For me, the above script produces
```
In-memory Tensor access: 0.0025 seconds
HF Dataset access: 2.9317 seconds
In-memory HF Dataset access: 2.8082 seconds
```
I think that this difference is larger than expected.
### Environment info
- `datasets` version: 4.0.0
- Platform: macOS-14.7.7-arm64-arm-64bit
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7753/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7752
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7752/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7752/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7752/events
|
https://github.com/huggingface/datasets/pull/7752
| 3,358,374,882 |
PR_kwDODunzps6ljQLy
| 7,752 |
Fix: Update Dill Version in Setup py
|
{
"login": "Navanit-git",
"id": 98005188,
"node_id": "U_kgDOBddwxA",
"avatar_url": "https://avatars.githubusercontent.com/u/98005188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Navanit-git",
"html_url": "https://github.com/Navanit-git",
"followers_url": "https://api.github.com/users/Navanit-git/followers",
"following_url": "https://api.github.com/users/Navanit-git/following{/other_user}",
"gists_url": "https://api.github.com/users/Navanit-git/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Navanit-git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Navanit-git/subscriptions",
"organizations_url": "https://api.github.com/users/Navanit-git/orgs",
"repos_url": "https://api.github.com/users/Navanit-git/repos",
"events_url": "https://api.github.com/users/Navanit-git/events{/privacy}",
"received_events_url": "https://api.github.com/users/Navanit-git/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"https://github.com/huggingface/datasets/issues/7751",
"same as https://github.com/huggingface/datasets/pull/7763: some tests need to be fixed to support 0.4.0"
] | 2025-08-27T07:39:51 | 2025-09-12T13:21:30 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7752",
"html_url": "https://github.com/huggingface/datasets/pull/7752",
"diff_url": "https://github.com/huggingface/datasets/pull/7752.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7752.patch",
"merged_at": null
}
|
Currently the dill pin is below 0.3.9, while major libraries like multiprocess and gepa now require dill 0.4.0, which creates an installation conflict. So I added this small PR to update the dill pin.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7752/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7751
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7751/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7751/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7751/events
|
https://github.com/huggingface/datasets/issues/7751
| 3,358,369,976 |
I_kwDODunzps7ILKi4
| 7,751 |
Dill version update
|
{
"login": "Navanit-git",
"id": 98005188,
"node_id": "U_kgDOBddwxA",
"avatar_url": "https://avatars.githubusercontent.com/u/98005188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Navanit-git",
"html_url": "https://github.com/Navanit-git",
"followers_url": "https://api.github.com/users/Navanit-git/followers",
"following_url": "https://api.github.com/users/Navanit-git/following{/other_user}",
"gists_url": "https://api.github.com/users/Navanit-git/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Navanit-git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Navanit-git/subscriptions",
"organizations_url": "https://api.github.com/users/Navanit-git/orgs",
"repos_url": "https://api.github.com/users/Navanit-git/repos",
"events_url": "https://api.github.com/users/Navanit-git/events{/privacy}",
"received_events_url": "https://api.github.com/users/Navanit-git/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"#7752 ",
"related: #7510 "
] | 2025-08-27T07:38:30 | 2025-09-10T14:24:02 | null |
NONE
| null | null | null | null |
### Describe the bug
Why is `datasets` not updating its `dill` pin?
I'd just like to know what the repercussions would be of updating the dill version.
In multiple places I now have to work around libraries (e.g. multiprocess) that require dill 0.4.0, so why not update it in datasets?
I'm adding a PR too.
### Steps to reproduce the bug
.
### Expected behavior
.
### Environment info
.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7751/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7750
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7750/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7750/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7750/events
|
https://github.com/huggingface/datasets/pull/7750
| 3,357,275,291 |
PR_kwDODunzps6lfwcx
| 7,750 |
Refactor: use unpacking in load.py for time and memory improvement
|
{
"login": "brchristian",
"id": 2460418,
"node_id": "MDQ6VXNlcjI0NjA0MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2460418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brchristian",
"html_url": "https://github.com/brchristian",
"followers_url": "https://api.github.com/users/brchristian/followers",
"following_url": "https://api.github.com/users/brchristian/following{/other_user}",
"gists_url": "https://api.github.com/users/brchristian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brchristian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brchristian/subscriptions",
"organizations_url": "https://api.github.com/users/brchristian/orgs",
"repos_url": "https://api.github.com/users/brchristian/repos",
"events_url": "https://api.github.com/users/brchristian/events{/privacy}",
"received_events_url": "https://api.github.com/users/brchristian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-08-26T22:13:11 | 2025-08-26T22:13:11 | null |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7750",
"html_url": "https://github.com/huggingface/datasets/pull/7750",
"diff_url": "https://github.com/huggingface/datasets/pull/7750.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7750.patch",
"merged_at": null
}
|
In `src/datasets/load.py`, we can use unpacking rather than concatenating two lists for improved time and memory performance. It’s a small improvement in absolute terms, but a consistent and measurable one:
```diff
- ALL_ALLOWED_EXTENSIONS = list(_EXTENSION_TO_MODULE.keys()) + [".zip"]
+ ALL_ALLOWED_EXTENSIONS = [*_EXTENSION_TO_MODULE.keys(), ".zip"]
```
Benchmarking shows approximately 32.3% time improvement and 30.6% memory improvement.
Example benchmarking script:
```python
#!/usr/bin/env python3
"""
Benchmark script to test performance of list(_EXTENSION_TO_MODULE.keys()) vs [*_EXTENSION_TO_MODULE.keys()]
"""
import time
import tracemalloc
from statistics import mean, stdev
# Simulate _EXTENSION_TO_MODULE - based on actual size from datasets
_EXTENSION_TO_MODULE = {
f".ext{i}": f"module{i}" for i in range(20) # Realistic size
}
def method_old():
"""Current implementation using list()"""
return list(_EXTENSION_TO_MODULE.keys()) + [".zip"]
def method_new():
"""Proposed implementation using unpacking"""
return [*_EXTENSION_TO_MODULE.keys(), ".zip"]
def benchmark_time(func, iterations=100000):
"""Benchmark execution time"""
times = []
for _ in range(10): # Multiple runs for accuracy
start = time.perf_counter()
for _ in range(iterations):
func()
end = time.perf_counter()
times.append((end - start) / iterations * 1_000_000) # microseconds
return mean(times), stdev(times)
def benchmark_memory(func):
"""Benchmark peak memory usage"""
tracemalloc.start()
func()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
return peak
if __name__ == "__main__":
print("Benchmarking list() vs unpacking performance...\n")
# Time benchmarks
old_time, old_std = benchmark_time(method_old)
new_time, new_std = benchmark_time(method_new)
print(f"Time Performance (µs per operation):")
print(f" list() approach: {old_time:.3f} ± {old_std:.3f}")
print(f" unpacking approach: {new_time:.3f} ± {new_std:.3f}")
print(f" Improvement: {((old_time - new_time) / old_time * 100):.1f}% faster")
# Memory benchmarks
old_mem = benchmark_memory(method_old)
new_mem = benchmark_memory(method_new)
print(f"\nMemory Usage (bytes):")
print(f" list() approach: {old_mem}")
print(f" unpacking approach: {new_mem}")
print(f" Reduction: {old_mem - new_mem} bytes ({((old_mem - new_mem) / old_mem * 100):.1f}% less)")
# Verify identical results
assert method_old() == method_new(), "Results should be identical!"
print(f"\n✓ Both methods produce identical results")
```
Results:
```
Benchmarking list() vs unpacking performance...
Time Performance (µs per operation):
list() approach: 0.213 ± 0.020
unpacking approach: 0.144 ± 0.002
Improvement: 32.3% faster
Memory Usage (bytes):
list() approach: 392
unpacking approach: 272
Reduction: 120 bytes (30.6% less)
✓ Both methods produce identical results
```
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7750/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7749
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7749/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7749/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7749/events
|
https://github.com/huggingface/datasets/pull/7749
| 3,356,567,923 |
PR_kwDODunzps6lddDW
| 7,749 |
Fix typo in error message for cache directory deletion
|
{
"login": "brchristian",
"id": 2460418,
"node_id": "MDQ6VXNlcjI0NjA0MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2460418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brchristian",
"html_url": "https://github.com/brchristian",
"followers_url": "https://api.github.com/users/brchristian/followers",
"following_url": "https://api.github.com/users/brchristian/following{/other_user}",
"gists_url": "https://api.github.com/users/brchristian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brchristian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brchristian/subscriptions",
"organizations_url": "https://api.github.com/users/brchristian/orgs",
"repos_url": "https://api.github.com/users/brchristian/repos",
"events_url": "https://api.github.com/users/brchristian/events{/privacy}",
"received_events_url": "https://api.github.com/users/brchristian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[] | 2025-08-26T17:47:22 | 2025-09-12T15:43:08 | 2025-09-12T13:22:18 |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7749",
"html_url": "https://github.com/huggingface/datasets/pull/7749",
"diff_url": "https://github.com/huggingface/datasets/pull/7749.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7749.patch",
"merged_at": "2025-09-12T13:22:18"
}
|
This PR fixes a small typo in an error message in `src/datasets/fingerprint.py`:
https://github.com/huggingface/datasets/blob/910fab20606893f69b4fccac5fcc883dddf5a14d/src/datasets/fingerprint.py#L63
```diff
- occured
+ occurred
```
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7749/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7748
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7748/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7748/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7748/events
|
https://github.com/huggingface/datasets/pull/7748
| 3,347,137,663 |
PR_kwDODunzps6k-adX
| 7,748 |
docs: Streaming best practices
|
{
"login": "Abdul-Omira",
"id": 32625230,
"node_id": "MDQ6VXNlcjMyNjI1MjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/32625230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abdul-Omira",
"html_url": "https://github.com/Abdul-Omira",
"followers_url": "https://api.github.com/users/Abdul-Omira/followers",
"following_url": "https://api.github.com/users/Abdul-Omira/following{/other_user}",
"gists_url": "https://api.github.com/users/Abdul-Omira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abdul-Omira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abdul-Omira/subscriptions",
"organizations_url": "https://api.github.com/users/Abdul-Omira/orgs",
"repos_url": "https://api.github.com/users/Abdul-Omira/repos",
"events_url": "https://api.github.com/users/Abdul-Omira/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abdul-Omira/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[] | 2025-08-23T00:18:43 | 2025-09-07T02:33:36 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7748",
"html_url": "https://github.com/huggingface/datasets/pull/7748",
"diff_url": "https://github.com/huggingface/datasets/pull/7748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7748.patch",
"merged_at": null
}
|
Add a new 'Streaming best practices' page with practical patterns and pitfalls for large-scale/production use of IterableDataset. Includes examples for batched map with remove_columns, deterministic shuffling with set_epoch, multi-worker sharding, checkpoint/resume, and persistence to Parquet/Hub. Linked from How-to > General usage, next to Stream.
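As a library-agnostic illustration of the multi-worker sharding pattern the page covers, the idea boils down to round-robin slicing of the stream. A sketch (the actual `IterableDataset` sharding API differs; this only shows the concept):

```python
from itertools import islice

def shard_stream(stream, num_workers: int, worker_id: int):
    """Round-robin sharding: worker i sees items i, i + num_workers, i + 2*num_workers, ..."""
    return islice(stream, worker_id, None, num_workers)

# With 3 workers, worker 1 consumes every third item starting at index 1
list(shard_stream(range(10), num_workers=3, worker_id=1))  # [1, 4, 7]
```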
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7748/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7747
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7747/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7747/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7747/events
|
https://github.com/huggingface/datasets/pull/7747
| 3,347,098,038 |
PR_kwDODunzps6k-Rtd
| 7,747 |
Add wikipedia-2023-redirects dataset
|
{
"login": "Abdul-Omira",
"id": 32625230,
"node_id": "MDQ6VXNlcjMyNjI1MjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/32625230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abdul-Omira",
"html_url": "https://github.com/Abdul-Omira",
"followers_url": "https://api.github.com/users/Abdul-Omira/followers",
"following_url": "https://api.github.com/users/Abdul-Omira/following{/other_user}",
"gists_url": "https://api.github.com/users/Abdul-Omira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abdul-Omira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abdul-Omira/subscriptions",
"organizations_url": "https://api.github.com/users/Abdul-Omira/orgs",
"repos_url": "https://api.github.com/users/Abdul-Omira/repos",
"events_url": "https://api.github.com/users/Abdul-Omira/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abdul-Omira/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"you should host this dataset on HF with `ds.push_to_hub()` ! we stopped using dataset scripts some time ago"
] | 2025-08-22T23:49:53 | 2025-09-12T13:23:34 | null |
NONE
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7747",
"html_url": "https://github.com/huggingface/datasets/pull/7747",
"diff_url": "https://github.com/huggingface/datasets/pull/7747.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7747.patch",
"merged_at": null
}
|
Title: Add wikipedia-2023-redirects dataset (redirect resolution + pageviews)
Summary
- New dataset loader: wikipedia_2023_redirects
- Canonical Wikipedia pages enriched with:
- redirects (aliases pointing to the page)
- 2023 pageviews (aggregated)
- Streaming support; robust parsing; license notes included
- Tests with tiny dummy data (XML + TSVs); covers streaming
Motivation
RAG/retrieval often benefits from:
- Query expansion via redirect aliases
- Popularity prior via pageviews
This loader offers a practical, maintenance-light way to access canonical pages alongside their redirect aliases and 2023 pageview totals.
Features
- id: string
- title: string
- url: string
- text: string
- redirects: list[string]
- pageviews_2023: int32
- timestamp: string
Licensing
- Wikipedia text: CC BY-SA 3.0 (attribution and share-alike apply)
- Pageviews: public domain
The PR docs mention both, and the module docstring cites sources.
Notes
- The URLs in _get_urls_for_config are wired to dummy files for tests. In production, these would point to Wikimedia dumps:
- XML page dumps: https://dumps.wikimedia.org/
- Pageviews: https://dumps.wikimedia.org/other/pageviews/
- The schema is intentionally simple and stable. Pageview aggregation is per-title sum across 2023.
Testing
- make style && make quality
- pytest -q tests/test_dataset_wikipedia_2023_redirects.py
Example
```python
from datasets import load_dataset
ds = load_dataset("wikipedia_2023_redirects", split="train")
print(ds[0]["title"], ds[0]["redirects"][:5], ds[0]["pageviews_2023"])
```
Acknowledgements
- Wikipedia/Wikimedia Foundation for the source data
- Hugging Face Datasets for the dataset infrastructure
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7747/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7746
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7746/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7746/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7746/events
|
https://github.com/huggingface/datasets/issues/7746
| 3,345,391,211 |
I_kwDODunzps7HZp5r
| 7,746 |
Fix: Canonical 'multi_news' dataset is broken and should be updated to a Parquet version
|
{
"login": "Awesome075",
"id": 187888489,
"node_id": "U_kgDOCzLzaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/187888489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Awesome075",
"html_url": "https://github.com/Awesome075",
"followers_url": "https://api.github.com/users/Awesome075/followers",
"following_url": "https://api.github.com/users/Awesome075/following{/other_user}",
"gists_url": "https://api.github.com/users/Awesome075/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Awesome075/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Awesome075/subscriptions",
"organizations_url": "https://api.github.com/users/Awesome075/orgs",
"repos_url": "https://api.github.com/users/Awesome075/repos",
"events_url": "https://api.github.com/users/Awesome075/events{/privacy}",
"received_events_url": "https://api.github.com/users/Awesome075/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"@sayakpaul @a-r-r-o-w could you verify this issue then i can contribute to solve this issue!😊"
] | 2025-08-22T12:52:03 | 2025-08-27T20:23:35 | null |
NONE
| null | null | null | null |
Hi,
The canonical `multi_news` dataset is currently broken and fails to load. This is because it points to the [alexfabbri/multi_news](https://huggingface.co/datasets/alexfabbri/multi_news) repository, which contains a legacy loading script (`multi_news.py`) that requires the now-removed `trust_remote_code` parameter.
The original maintainer's GitHub and Hugging Face repositories appear to be inactive, so a community-led fix is needed.
I have created a working fix by converting the dataset to the modern Parquet format, which does not require a loading script. The fixed version is available here and loads correctly:
**[Awesome075/multi_news_parquet](https://huggingface.co/datasets/Awesome075/multi_news_parquet)**
Could the maintainers please either guide me through the process or update the official `multi_news` dataset themselves so that it uses this working Parquet version? This would involve updating the canonical pointer for `multi_news` to resolve to the new repository.
This action would fix the dataset for all users and ensure its continued availability.
Thank you!
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7746/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7745
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7745/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7745/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7745/events
|
https://github.com/huggingface/datasets/issues/7745
| 3,345,286,773 |
I_kwDODunzps7HZQZ1
| 7,745 |
Audio mono argument no longer supported, despite class documentation
|
{
"login": "jheitz",
"id": 5666041,
"node_id": "MDQ6VXNlcjU2NjYwNDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5666041?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jheitz",
"html_url": "https://github.com/jheitz",
"followers_url": "https://api.github.com/users/jheitz/followers",
"following_url": "https://api.github.com/users/jheitz/following{/other_user}",
"gists_url": "https://api.github.com/users/jheitz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jheitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jheitz/subscriptions",
"organizations_url": "https://api.github.com/users/jheitz/orgs",
"repos_url": "https://api.github.com/users/jheitz/repos",
"events_url": "https://api.github.com/users/jheitz/events{/privacy}",
"received_events_url": "https://api.github.com/users/jheitz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
"I want to solve this problem can you please assign it to me\nand also can you please guide whether the mono parameter is required to be re-added or the documentation needs an update?"
] | 2025-08-22T12:15:41 | 2025-08-24T18:22:41 | null |
NONE
| null | null | null | null |
### Describe the bug
Either update the documentation or re-introduce the flag (and the corresponding logic to convert the audio to mono).
### Steps to reproduce the bug
`Audio(sampling_rate=16000, mono=True)` raises the error
`TypeError: Audio.__init__() got an unexpected keyword argument 'mono'`
However, the class documentation says:
Args:
sampling_rate (`int`, *optional*):
Target sampling rate. If `None`, the native sampling rate is used.
mono (`bool`, defaults to `True`):
Whether to convert the audio signal to mono by averaging samples across
channels.
[...]
### Expected behavior
The above call should either work, or the documentation within the Audio class should be updated
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
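Until the `mono` flag is re-added (or the docstring updated), a minimal workaround is to downmix manually after decoding. This is only a sketch: the channel axis is an assumption here, since decoders differ (some return `(channels, samples)`, others `(samples, channels)`):

```python
import numpy as np

def to_mono(audio_array: np.ndarray) -> np.ndarray:
    """Downmix a multi-channel signal by averaging across channels.

    Assumes a (channels, samples) layout; adjust `axis` if your decoder
    returns (samples, channels) instead.
    """
    if audio_array.ndim > 1:
        return audio_array.mean(axis=0)
    return audio_array  # already mono

stereo = np.array([[1.0, 3.0], [3.0, 5.0]])  # 2 channels, 2 samples
to_mono(stereo)  # [2.0, 4.0]
```

This could then be applied per example in a `dataset.map(...)` call after decoding.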
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7745/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7744
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7744/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7744/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7744/events
|
https://github.com/huggingface/datasets/issues/7744
| 3,343,510,686 |
I_kwDODunzps7HSeye
| 7,744 |
dtype: ClassLabel is not parsed correctly in `features.py`
|
{
"login": "cmatKhan",
"id": 43553003,
"node_id": "MDQ6VXNlcjQzNTUzMDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmatKhan",
"html_url": "https://github.com/cmatKhan",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions",
"organizations_url": "https://api.github.com/users/cmatKhan/orgs",
"repos_url": "https://api.github.com/users/cmatKhan/repos",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmatKhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"I think it's \"class_label\"",
"> I think it's \"class_label\"\n\nI see -- thank you. This works\n\n```yaml\nlicense: mit\nlanguage:\n- en\ntags:\n- genomics\n- yeast\n- transcription\n- perturbation\n- response\n- overexpression\npretty_name: Hackett, 2020 Overexpression\nsize_categories:\n- 1M<n<10M\ndataset_info:\n features:\n ...\n - name: mechanism\n dtype:\n class_label:\n names: [\"GEV\", \"ZEV\"]\n description: induction system (GEV or ZEV)\n - name: restriction\n dtype:\n class_label:\n names: [\"M\", \"N\", \"P\"]\n description: nutrient limitation (M, N or P)\n```\n\nI see the documentation for [datasets.ClassLabel](https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.ClassLabel). And the documentation for the [dataset cards](https://huggingface.co/docs/hub/en/datasets-cards). I don't see anything in either of those places, though, that specifies the pattern above.\n\nI suppose rather than writing the yaml by hand, the expected workflow is to use `datasets` to construct these features?",
"I generally copy/paste and adapt a YAML from another dataset.\n\nBut it's also possible to generate it from `datasets` like that\n\n```python\n>>> import yaml\n>>> print(yaml.dump(features._to_yaml_list(), sort_keys=False))\n- name: start\n dtype: int32\n- name: end\n dtype: int32\n- name: restriction\n dtype:\n class_label:\n names: [\"M\", \"N\", \"P\"]\n```"
] | 2025-08-21T23:28:50 | 2025-09-10T15:23:41 | 2025-09-10T15:23:41 |
NONE
| null | null | null | null |
`dtype: ClassLabel` in the README.md yaml metadata is parsed incorrectly and causes the data viewer to fail.
This yaml in my metadata ([source](https://huggingface.co/datasets/BrentLab/yeast_genome_resources/blob/main/README.md), though I have since changed `ClassLabel` to `string` there to use a different dtype and avoid the error):
```yaml
license: mit
pretty_name: BrentLab Yeast Genome Resources
size_categories:
- 1K<n<10K
language:
- en
dataset_info:
features:
- name: start
dtype: int32
description: Start coordinate (1-based, **inclusive**)
- name: end
dtype: int32
description: End coordinate (1-based, **inclusive**)
- name: strand
dtype: ClassLabel
...
```
is producing the following error in the data viewer:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 66, in compute_config_names_response
config_names = get_dataset_config_names(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 161, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1031, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 996, in dataset_module_factory
return HubDatasetModuleFactory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 605, in get_module
dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 386, in from_dataset_card_data
dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 317, in _from_yaml_dict
yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 2027, in _from_yaml_list
return cls.from_dict(from_yaml_inner(yaml_data))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1872, in from_dict
obj = generate_from_dict(dic)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1465, in generate_from_dict
raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
ValueError: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```
I think that this is caused by this line
https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/features/features.py#L2013
Reproducible example from [naming.py](https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/naming.py)
```python
import itertools
import os
import re
_uppercase_uppercase_re = re.compile(r"([A-Z]+)([A-Z][a-z])")
_lowercase_uppercase_re = re.compile(r"([a-z\d])([A-Z])")
_single_underscore_re = re.compile(r"(?<!_)_(?!_)")
_multiple_underscores_re = re.compile(r"(_{2,})")
_split_re = r"^\w+(\.\w+)*$"
def snakecase_to_camelcase(name):
"""Convert snake-case string to camel-case string."""
name = _single_underscore_re.split(name)
name = [_multiple_underscores_re.split(n) for n in name]
return "".join(n.capitalize() for n in itertools.chain.from_iterable(name) if n != "")
snakecase_to_camelcase("ClassLabel")
```
Result:
```raw
'Classlabel'
```
|
{
"login": "cmatKhan",
"id": 43553003,
"node_id": "MDQ6VXNlcjQzNTUzMDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmatKhan",
"html_url": "https://github.com/cmatKhan",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "https://api.github.com/users/cmatKhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmatKhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmatKhan/subscriptions",
"organizations_url": "https://api.github.com/users/cmatKhan/orgs",
"repos_url": "https://api.github.com/users/cmatKhan/repos",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmatKhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7744/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7743
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7743/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7743/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7743/events
|
https://github.com/huggingface/datasets/pull/7743
| 3,342,611,297 |
PR_kwDODunzps6ku8Jw
| 7,743 |
Refactor HDF5 and preserve tree structure
|
{
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false | null |
[] | null |
[
"@lhoestq this is ready for you now!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7743). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-08-21T17:28:17 | 2025-08-26T15:28:05 | 2025-08-26T15:28:05 |
CONTRIBUTOR
| null | null | false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7743",
"html_url": "https://github.com/huggingface/datasets/pull/7743",
"diff_url": "https://github.com/huggingface/datasets/pull/7743.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7743.patch",
"merged_at": "2025-08-26T15:28:05"
}
|
Closes #7741. Followup to #7690
- Recursive parsing and feature inference, to preserve the tree structure of the file. Note this means we now visit all links in the file. It also means we have to call `combine_chunks` on any large non-root datasets.
- Support for `complex64` (two `float32`s, used to be converted to two `float64`s)
- Support for n-dimensional complex and compound datasets, plus more field types for compound (since the main parser is reused, compound types are treated like groups)
- Cleaned up varlen support
- Always do feature inference and always cast to features (used to cast to schema)
- Updated tests to use `load_dataset` instead of internal APIs
- Removed `columns` from the config; to filter columns, you now have to pass `Features` (i.e., you must specify types)
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7743/timeline
| null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7742
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7742/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7742/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7742/events
|
https://github.com/huggingface/datasets/issues/7742
| 3,336,704,928 |
I_kwDODunzps7G4hOg
| 7,742 |
module 'pyarrow' has no attribute 'PyExtensionType'
|
{
"login": "mnedelko",
"id": 6106392,
"node_id": "MDQ6VXNlcjYxMDYzOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6106392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mnedelko",
"html_url": "https://github.com/mnedelko",
"followers_url": "https://api.github.com/users/mnedelko/followers",
"following_url": "https://api.github.com/users/mnedelko/following{/other_user}",
"gists_url": "https://api.github.com/users/mnedelko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mnedelko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnedelko/subscriptions",
"organizations_url": "https://api.github.com/users/mnedelko/orgs",
"repos_url": "https://api.github.com/users/mnedelko/repos",
"events_url": "https://api.github.com/users/mnedelko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mnedelko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false | null |
[] | null |
[
  "Just checked out the files and this had already been addressed",
"For others who find this issue: \n\n`pip install --upgrade \"datasets>=2.20.0\"` \n\nfrom https://github.com/explodinggradients/ragas/issues/2170#issuecomment-3204393672 can fix it."
] | 2025-08-20T06:14:33 | 2025-09-09T02:51:46 | null |
NONE
| null | null | null | null |
### Describe the bug
When importing certain libraries, users will encounter the following error which can be traced back to the datasets library.
module 'pyarrow' has no attribute 'PyExtensionType'.
Example issue: https://github.com/explodinggradients/ragas/issues/2170
The issue occurs due to the following. I will proceed to submit a PR with the below fix:
**Issue Reason**
The issue is that PyArrow version 21.0.0 doesn’t have PyExtensionType. This was changed in newer versions of PyArrow. The
PyExtensionType class was renamed to ExtensionType in PyArrow 13.0.0 and later versions.
**Issue Solution**
Making the following changes to the lib files below should temporarily resolve the issue.
I will submit a PR to the datasets library in the meantime.
env_name/lib/python3.10/site-packages/datasets/features/features.py:
```
  521     self.shape = tuple(shape)
522 self.value_type = dtype
523 self.storage_dtype = self._generate_dtype(self.value_type)
524 - pa.PyExtensionType.__init__(self, self.storage_dtype)
524 + pa.ExtensionType.__init__(self, self.storage_dtype)
525
526 def __reduce__(self):
527 return self.__class__, (
```
Updated venv_name/lib/python3.10/site-packages/datasets/features/features.py:
```
  510     _type: str = field(default="Array5D", init=False, repr=False)
511
512
513 - class _ArrayXDExtensionType(pa.PyExtensionType):
513 + class _ArrayXDExtensionType(pa.ExtensionType):
514 ndims: Optional[int] = None
515
516 def __init__(self, shape: tuple, dtype: str):
```
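The fix above hinges on which base class the installed pyarrow exposes. A defensive pattern is to pick the base class at import time rather than hard-coding one name. This is a hedged sketch only: the `legacy_pa`/`modern_pa` objects are stand-in namespaces (an assumption, so the sketch runs without pyarrow installed), and `pick_extension_base` is an illustrative helper, not a `datasets` API.

```python
from types import SimpleNamespace

def pick_extension_base(pa):
    # Prefer the legacy class when present, otherwise fall back to
    # the modern ExtensionType.
    return getattr(pa, "PyExtensionType", None) or pa.ExtensionType

# Stand-ins for two pyarrow versions (assumption: real modules differ
# only in whether PyExtensionType exists).
legacy_pa = SimpleNamespace(
    PyExtensionType=type("PyExtensionType", (), {}),
    ExtensionType=type("ExtensionType", (), {}),
)
modern_pa = SimpleNamespace(ExtensionType=type("ExtensionType", (), {}))

print(pick_extension_base(legacy_pa).__name__)  # PyExtensionType
print(pick_extension_base(modern_pa).__name__)  # ExtensionType
```

Note that a full port also needs the extra arguments and serialization hooks that `pa.ExtensionType` requires, which is why a library-side fix is preferable to patching installed files.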
### Steps to reproduce the bug
Ragas version: 0.3.1
Python version: 3.11
**Code to Reproduce**
_**In notebook:**_
!pip install ragas
from ragas import evaluate
### Expected behavior
The required package installs without issue.
### Environment info
In Jupyter Notebook.
venv
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7742/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7742/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |
https://api.github.com/repos/huggingface/datasets/issues/7741
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7741/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7741/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7741/events
|
https://github.com/huggingface/datasets/issues/7741
| 3,334,848,656 |
I_kwDODunzps7GxcCQ
| 7,741 |
Preserve tree structure when loading HDF5
|
{
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] | null |
[] | 2025-08-19T15:42:05 | 2025-08-26T15:28:06 | 2025-08-26T15:28:06 |
CONTRIBUTOR
| null | null | null | null |
### Feature request
https://github.com/huggingface/datasets/pull/7740#discussion_r2285605374
### Motivation
`datasets` has the `Features` class for representing nested features. HDF5 files have groups of datasets which are nested, though in #7690 the keys are flattened. We should preserve that structure for the user.
### Your contribution
I'll open a PR (#7743)
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7741/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false |