Dataset Viewer
Auto-converted to Parquet
Columns:
url: string (length 58 to 61)
repository_url: string (1 distinct value)
labels_url: string (length 72 to 75)
comments_url: string (length 67 to 70)
events_url: string (length 65 to 68)
html_url: string (length 48 to 51)
id: int64 (600M to 3.67B)
node_id: string (length 18 to 24)
number: int64 (2 to 7.88k)
title: string (length 1 to 290)
user: dict
labels: list (length 0 to 4)
state: string (2 distinct values)
locked: bool (1 class)
assignee: dict
assignees: list (length 0 to 4)
comments: list (length 0 to 30)
created_at: timestamp[s] (2020-04-14 18:18:51 to 2025-11-26 16:16:56)
updated_at: timestamp[s] (2020-04-29 09:23:05 to 2025-11-30 03:52:07)
closed_at: timestamp[s] (2020-04-29 09:23:05 to 2025-11-21 12:31:19)
author_association: string (4 distinct values)
type: null
active_lock_reason: null
draft: null
pull_request: null
body: string (length 0 to 228k)
closed_by: dict
reactions: dict
timeline_url: string (length 67 to 70)
performed_via_github_app: null
state_reason: string (4 distinct values)
sub_issues_summary: dict
issue_dependencies_summary: dict
is_pull_request: bool (1 class)
closed_at_time_taken: duration[s]
https://api.github.com/repos/huggingface/datasets/issues/7883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7883/comments
https://api.github.com/repos/huggingface/datasets/issues/7883/events
https://github.com/huggingface/datasets/issues/7883
3,668,182,561
I_kwDODunzps7apAYh
7,883
Data.to_csv() cannot be recognized by pylance
{ "avatar_url": "https://avatars.githubusercontent.com/u/154290630?v=4", "events_url": "https://api.github.com/users/xi4ngxin/events{/privacy}", "followers_url": "https://api.github.com/users/xi4ngxin/followers", "following_url": "https://api.github.com/users/xi4ngxin/following{/other_user}", "gists_url": "https://api.github.com/users/xi4ngxin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xi4ngxin", "id": 154290630, "login": "xi4ngxin", "node_id": "U_kgDOCTJJxg", "organizations_url": "https://api.github.com/users/xi4ngxin/orgs", "received_events_url": "https://api.github.com/users/xi4ngxin/received_events", "repos_url": "https://api.github.com/users/xi4ngxin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xi4ngxin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xi4ngxin/subscriptions", "type": "User", "url": "https://api.github.com/users/xi4ngxin", "user_view_type": "public" }
[]
open
false
null
[]
[]
2025-11-26T16:16:56
2025-11-26T16:16:56
null
NONE
null
null
null
null
### Describe the bug Hi, everyone ! I am a beginner with datasets. I am testing reading multiple CSV files from a zip archive. The result of reading the dataset shows success, and it can ultimately be correctly saved to CSV. Intermediate results: ``` Generating train split: 62973 examples [00:00, 175939.01 examples/s] DatasetDict({ train: Dataset({ features: ['交易时间\t', '收支方向\t', '业务(产品)种类\t', '交易金额\t', '币种\t', '时点余额\t', '对手方名称\t', '对方机构名称\t', ' 对方钱包ID/账号\t', '交易对手名称\t', '交易对手编号\t', '交易流水号\t', '摘要\t', '附言\t', '备注\t', '用途\t', '客户流水号\t'], num_rows: 62973 }) }) ``` However, Pylance gives me the following error: ``` Cannot access attribute "to_csv" for class "DatasetDict" Attribute "to_csv" is unknownPylance[reportAttributeAccessIssue](https://github.com/microsoft/pylance-release/blob/main/docs/diagnostics/reportAttributeAccessIssue.md)``` Cannot access attribute "to_csv" for class "IterableDatasetDict" Attribute "to_csv" is unknownPylance[reportAttributeAccessIssue](https://github.com/microsoft/pylance-release/blob/main/docs/diagnostics/reportAttributeAccessIssue.md) (method) to_csv: Unknown | ((path_or_buf: datasets.utils.typing.PathLike | BinaryIO, batch_size: int | None = None, num_proc: int | None = None, storage_options: dict[Unknown, Unknown] | None = None, **to_csv_kwargs: Unknown) -> int) | ((path_or_buf: datasets.utils.typing.PathLike | BinaryIO, batch_size: int | None = None, storage_options: dict[Unknown, Unknown] | None = None, **to_csv_kwargs: Unknown) -> int) ``` I ignored the error and continued executing to get the correct result: ``` Dataset({ features: ['交易时间\t', '收支方向\t', '业务(产品)种类\t', '交易金额\t', '币种\t', '时点余额\t', '对手方名称\t', '对方机构名称\t', '对方 钱包ID/账号\t', '交易对手名称\t', '交易对手编号\t', '交易流水号\t', '摘要\t', '附言\t', '备注\t', '用途\t', '客户流水号\t'], num_rows: 62973 }) ``` Since the data volume is small, I manually merged the CSV files, and the final result is consistent with what the program saved. looks like : <img width="1264" height="150" alt="Image" src="https://github.com/user-attachments/assets/743540d7-ad8c-4531-ae7e-de71a5243a32" /> ### Steps to reproduce the bug this is my code. ``` from datasets import load_dataset def main(): url = "data/test.zip" data_files = {"train": url} dataset = load_dataset("csv", data_files=data_files,split="train", encoding="gbk", skiprows=2) # print(dataset) dataset.to_csv("data/test.csv") if __name__ == "__main__": main() ``` ### Expected behavior I want to know why this happens. Is there something wrong with my code? ### Environment info OS: Windows 11 **upgrade from** OS: Windows_NT x64 10.0.22631 Editor: VS Code Version: 1.106.2 (user setup) "datasets" version = "4.4.1"
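The Pylance complaint likely comes from `load_dataset` being annotated to return a union of dataset types (`Dataset`, `DatasetDict`, `IterableDataset`, `IterableDatasetDict`), so the checker cannot prove the result has `to_csv`. Below is a minimal sketch of one way to narrow the type before calling `to_csv`; the zip path and the `encoding`/`skiprows` arguments are taken from the report above and are otherwise placeholders.
```python
from datasets import Dataset, load_dataset

def main() -> None:
    # load_dataset is typed as returning a union of dataset classes,
    # so narrow the result explicitly to satisfy Pylance at the call site.
    ds = load_dataset(
        "csv",
        data_files={"train": "data/test.zip"},
        split="train",
        encoding="gbk",
        skiprows=2,
    )
    # With split=... and streaming disabled this holds at runtime.
    assert isinstance(ds, Dataset)
    ds.to_csv("data/test.csv")  # now recognized by the type checker

if __name__ == "__main__":
    main()
```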
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7883/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7883/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7882/comments
https://api.github.com/repos/huggingface/datasets/issues/7882/events
https://github.com/huggingface/datasets/issues/7882
3,667,664,527
I_kwDODunzps7anB6P
7,882
Inconsistent loading of LFS-hosted files in epfml/FineWeb-HQ dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/6270922?v=4", "events_url": "https://api.github.com/users/Oligou/events{/privacy}", "followers_url": "https://api.github.com/users/Oligou/followers", "following_url": "https://api.github.com/users/Oligou/following{/other_user}", "gists_url": "https://api.github.com/users/Oligou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Oligou", "id": 6270922, "login": "Oligou", "node_id": "MDQ6VXNlcjYyNzA5MjI=", "organizations_url": "https://api.github.com/users/Oligou/orgs", "received_events_url": "https://api.github.com/users/Oligou/received_events", "repos_url": "https://api.github.com/users/Oligou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Oligou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Oligou/subscriptions", "type": "User", "url": "https://api.github.com/users/Oligou", "user_view_type": "public" }
[]
open
false
null
[]
[]
2025-11-26T14:06:02
2025-11-26T14:06:02
null
NONE
null
null
null
null
### Describe the bug Some files in the `epfml/FineWeb-HQ` dataset fail to load via the Hugging Face `datasets` library. - xet-hosted files load fine - LFS-hosted files sometimes fail Example: - Fails: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-26/000_00003.parquet - Works: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-42/000_00027.parquet Discussion: https://huggingface.co/datasets/epfml/FineWeb-HQ/discussions/2 ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset( "epfml/FineWeb-HQ", data_files="data/CC-MAIN-2024-26/000_00003.parquet", ) ``` Error message: ``` HfHubHTTPError: 403 Forbidden: None. Cannot access content at: https://cdn-lfs-us-1.hf.co/repos/... Make sure your token has the correct permissions. ... <Error><Code>AccessDenied</Code><Message>Access Denied</Message></Error> ``` ### Expected behavior It should load the dataset for all files. ### Environment info - python 3.10 - datasets 4.4.1
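To help isolate whether the 403 comes from the CDN rather than from `datasets` itself, one possible check (a sketch, not an official diagnostic) is to fetch the failing LFS file directly with `huggingface_hub`, using the file path from the report above:
```python
from huggingface_hub import hf_hub_download

# Download the LFS-hosted parquet file directly, bypassing the datasets
# loader, to see whether the CDN itself answers 403 Access Denied.
path = hf_hub_download(
    repo_id="epfml/FineWeb-HQ",
    filename="data/CC-MAIN-2024-26/000_00003.parquet",
    repo_type="dataset",
)
print(path)
```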
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7882/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7882/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7880/comments
https://api.github.com/repos/huggingface/datasets/issues/7880/events
https://github.com/huggingface/datasets/issues/7880
3,667,561,864
I_kwDODunzps7amo2I
7,880
Spurious label column created when audiofolder/imagefolder directories match split names
{ "avatar_url": "https://avatars.githubusercontent.com/u/132138786?v=4", "events_url": "https://api.github.com/users/neha222222/events{/privacy}", "followers_url": "https://api.github.com/users/neha222222/followers", "following_url": "https://api.github.com/users/neha222222/following{/other_user}", "gists_url": "https://api.github.com/users/neha222222/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neha222222", "id": 132138786, "login": "neha222222", "node_id": "U_kgDOB-BHIg", "organizations_url": "https://api.github.com/users/neha222222/orgs", "received_events_url": "https://api.github.com/users/neha222222/received_events", "repos_url": "https://api.github.com/users/neha222222/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neha222222/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neha222222/subscriptions", "type": "User", "url": "https://api.github.com/users/neha222222", "user_view_type": "public" }
[]
open
false
null
[]
[]
2025-11-26T13:36:24
2025-11-26T13:36:24
null
NONE
null
null
null
null
## Describe the bug When using `audiofolder` or `imagefolder` with directories for **splits** (train/test) rather than class labels, a spurious `label` column is incorrectly created. **Example:** https://huggingface.co/datasets/datasets-examples/doc-audio-4 ``` from datasets import load_dataset ds = load_dataset("datasets-examples/doc-audio-4") print(ds["train"].features) ``` Shows 'label' column with ClassLabel(names=['test', 'train']) - incorrect! ## Root cause In `folder_based_builder.py`, the `labels` set is accumulated across ALL splits (line 77). When directories are `train/` and `test/`: - `labels = {"train", "test"}` → `len(labels) > 1` → `add_labels = True` - Spurious label column is created with split names as class labels ## Expected behavior No `label` column should be added when directory names match split names. ## Proposed fix Skip label inference when inferred labels match split names. cc @lhoestq
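A rough illustration of the proposed check, written as a standalone hypothetical helper rather than the actual `folder_based_builder.py` code: skip label inference when the inferred directory names coincide with split names.
```python
def should_add_labels(inferred_labels: set[str], split_names: set[str]) -> bool:
    """Hypothetical helper: only add a label column when the inferred
    directory names look like class labels, not split directories."""
    if inferred_labels and inferred_labels.issubset(split_names):
        return False  # e.g. {"train", "test"} are splits, not classes
    return len(inferred_labels) > 1

# Examples matching the report above:
print(should_add_labels({"train", "test"}, {"train", "test", "validation"}))  # False
print(should_add_labels({"cat", "dog"}, {"train", "test"}))                   # True
```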
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7880/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7880/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7879/comments
https://api.github.com/repos/huggingface/datasets/issues/7879/events
https://github.com/huggingface/datasets/issues/7879
3,657,249,446
I_kwDODunzps7Z_TKm
7,879
python core dump when downloading dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/5960219?v=4", "events_url": "https://api.github.com/users/hansewetz/events{/privacy}", "followers_url": "https://api.github.com/users/hansewetz/followers", "following_url": "https://api.github.com/users/hansewetz/following{/other_user}", "gists_url": "https://api.github.com/users/hansewetz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hansewetz", "id": 5960219, "login": "hansewetz", "node_id": "MDQ6VXNlcjU5NjAyMTk=", "organizations_url": "https://api.github.com/users/hansewetz/orgs", "received_events_url": "https://api.github.com/users/hansewetz/received_events", "repos_url": "https://api.github.com/users/hansewetz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hansewetz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hansewetz/subscriptions", "type": "User", "url": "https://api.github.com/users/hansewetz", "user_view_type": "public" }
[]
open
false
null
[]
[ "Hi @hansewetz I'm curious, for me it works just fine. Are you still observing the issue?", "Yup ... still the same issue.\nHowever, after adding a ```sleep(1)``` call after the ``` for``` loop by accident during debugging, the program terminates properly (not a good solution though ... :-) ).\nAre there some threads created that handles the download that are still running when the program exits?\nHaven't had time yet to go through the code in ```iterable_dataset.py::IterableDataset```\n", "Interesting, I was able to reproduce it, on a jupyter notebook the code runs just fine, as a Python script indeed it seems to never finish running (which is probably leading to the core dumped error). I'll try and take a look at the source code as well to see if I can figure it out.", "Hi @hansewetz ,\nIf possible can I be assigned with this issue?\n\n", "```If possible can I be assigned with this issue?```\nHi, I don't know how assignments work here and who can take decisions about assignments ... ", "Hi @hansewetz and @Aymuos22, I have made some progress:\n\n1) Confirmed last working version is 3.1.0\n\n2) From 3.1.0 to 3.2.0, there was a change in how parquet files are read (see [here](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/parquet/parquet.py/#168).\n\nThe issue seems to be the following code:\n\n```\nparquet_fragment.to_batches(\n batch_size=batch_size,\n columns=self.config.columns,\n filter=filter_expr,\n batch_readahead=0,\n fragment_readahead=0,\n )\n```\n\nAdding a `use_threads=False` parameter to the `to_batches` call solves the bug. However, this seems far from an optimal solution, since we'd like to be able to use multiple threads for reading the fragments. \n\nI'll keep investigating to see if there's a better solution.", "Hi @lhoestq, may I ask if the current behaviour was expected by you folks and you don't think it needs solving, or should I keep on investigating a compromise between using multithreading / avoid unexpected behaviour? Thanks in advance :) ", "Having the same issue. the code never stops executing. Using datasets 4.4.1\nTried with \"islice\" as well. When the streaming flag is True, the code doesn't end execution. On vs-code.", "The issue on pyarrow side is here: https://github.com/apache/arrow/issues/45214 and the original issue in `datasets` here: https://github.com/huggingface/datasets/issues/7357\n\nIt would be cool to have a fix on the pyarrow side", "Thank you very much @lhoestq, I'm reading the issue thread in pyarrow and realizing you've been raising awareness around this for a long time now. When I have some time I'll look at @pitrou's PR to see if I can get a better understanding of what's going on on pyarrow. " ]
2025-11-24T06:22:53
2025-11-25T20:45:55
null
NONE
null
null
null
null
### Describe the bug When downloading a dataset in streamed mode and exiting the program before the download completes, the python program core dumps when exiting: ``` terminate called without an active exception Aborted (core dumped) ``` Tested with python 3.12.3, python 3.9.21 ### Steps to reproduce the bug Create python venv: ```bash python -m venv venv ./venv/bin/activate pip install datasets==4.4.1 ``` Execute the following program: ``` from datasets import load_dataset ds = load_dataset("HuggingFaceFW/fineweb-2", 'hrv_Latn', split="test", streaming=True) for sample in ds: break ``` ### Expected behavior Clean program exit ### Environment info described above **note**: the example works correctly when using ```datasets==3.1.0```
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7879/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7879/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7877/comments
https://api.github.com/repos/huggingface/datasets/issues/7877/events
https://github.com/huggingface/datasets/issues/7877
3,652,906,788
I_kwDODunzps7Zuu8k
7,877
work around `tempfile` silently ignoring `TMPDIR` if the dir doesn't exist
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
[]
open
false
null
[]
[ "Hi! Just created a Pull Request (#7890) to try to fix this using your suggestions. I hope it helps!" ]
2025-11-21T19:51:48
2025-11-29T20:37:42
null
CONTRIBUTOR
null
null
null
null
This should help a lot of users running into `No space left on device` while using `datasets`. Normally the issue is that `/tmp` is too small and the user needs to use another path, which they would normally set as `export TMPDIR=/some/big/storage` However, the `tempfile` facility that `datasets` and `pyarrow` use is somewhat broken. If the path doesn't exist, it ignores it and falls back to using `/tmp`. Watch this: ``` $ export TMPDIR='/tmp/username' $ python -c "\ import os import tempfile print(os.environ['TMPDIR']) print(tempfile.gettempdir())" /tmp/username /tmp ``` Now let's ensure the path exists: ``` $ export TMPDIR='/tmp/username' $ mkdir -p $TMPDIR $ python -c "\ import os import tempfile print(os.environ['TMPDIR']) print(tempfile.gettempdir())" /tmp/username /tmp/username ``` So I recommend `datasets` do either of the two: 1. assert if the `$TMPDIR` dir doesn't exist, telling the user to create it 2. auto-create it The reason for (1) is that I don't know why `tempfile` doesn't auto-create the dir - perhaps there is some security implication? I will let you guys make the decision, but the key is not to let things silently fall through, leaving the user puzzling over why, no matter what they do, they can't get past `No space left on device` while using `datasets`. Thank you. I found this via https://stackoverflow.com/questions/37229398/python-tempfile-gettempdir-does-not-respect-tmpdir while trying to help a colleague to solve this exact issue.
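A minimal sketch of option 2 (auto-creating the directory) as it could be applied on the user side today; this is a workaround, not the proposed change inside `datasets`.
```python
import os
import tempfile
from pathlib import Path

# tempfile.gettempdir() silently falls back to /tmp when $TMPDIR does not
# exist, so create the directory before anything caches the temp dir.
tmpdir = os.environ.get("TMPDIR")
if tmpdir:
    Path(tmpdir).mkdir(parents=True, exist_ok=True)
    tempfile.tempdir = None  # drop any cached value so gettempdir() re-evaluates

print(tempfile.gettempdir())  # now respects TMPDIR
```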
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7877/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7877/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7872/comments
https://api.github.com/repos/huggingface/datasets/issues/7872/events
https://github.com/huggingface/datasets/issues/7872
3,643,681,893
I_kwDODunzps7ZLixl
7,872
IterableDataset does not use features information in to_pandas
{ "avatar_url": "https://avatars.githubusercontent.com/u/790640?v=4", "events_url": "https://api.github.com/users/bonext/events{/privacy}", "followers_url": "https://api.github.com/users/bonext/followers", "following_url": "https://api.github.com/users/bonext/following{/other_user}", "gists_url": "https://api.github.com/users/bonext/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bonext", "id": 790640, "login": "bonext", "node_id": "MDQ6VXNlcjc5MDY0MA==", "organizations_url": "https://api.github.com/users/bonext/orgs", "received_events_url": "https://api.github.com/users/bonext/received_events", "repos_url": "https://api.github.com/users/bonext/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bonext/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bonext/subscriptions", "type": "User", "url": "https://api.github.com/users/bonext", "user_view_type": "public" }
[]
open
false
null
[]
[ "Created A PR!", "Another test script that can be used to test the behavior - \n\n```\nimport datasets\nfrom datasets import features\n\ndef test_crash():\n common_features = features.Features({\n \"a\": features.Value(\"int64\"),\n \"b\": features.List({\"c\": features.Value(\"int64\")}),\n })\n\n def row_generator():\n yield {\"a\": 1, \"b\": []}\n yield {\"a\": 1, \"b\": [{\"c\": 1}]}\n\n d = datasets.IterableDataset.from_generator(row_generator, features=common_features)\n\n list(d.to_pandas()) # <-- this triggers the crash\n\n```" ]
2025-11-19T17:12:59
2025-11-19T18:52:14
null
NONE
null
null
null
null
### Describe the bug `IterableDataset` created from generator with explicit `features=` parameter seems to ignore provided features description for certain operations, e.g. `.to_pandas(...)` when data coming from the generator has missing values. ### Steps to reproduce the bug ```python import datasets from datasets import features def test_to_pandas_works_with_explicit_schema(): common_features = features.Features( { "a": features.Value("int64"), "b": features.List({"c": features.Value("int64")}), } ) def row_generator(): data = [{"a": 1, "b": []}, {"a": 1, "b": [{"c": 1}]}] for row in data: yield row d = datasets.IterableDataset.from_generator(row_generator, features=common_features) for _ in d.to_pandas(): pass # _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ # .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:3703: in to_pandas # table = pa.concat_tables(list(self.with_format("arrow").iter(batch_size=1000))) # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ # .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:2563: in iter # for key, pa_table in iterator: # ^^^^^^^^ # .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:2078: in _iter_arrow # for key, pa_table in self.ex_iterable._iter_arrow(): # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ # .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:599: in _iter_arrow # yield new_key, pa.Table.from_batches(chunks_buffer) # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ # pyarrow/table.pxi:5039: in pyarrow.lib.Table.from_batches # ??? # pyarrow/error.pxi:155: in pyarrow.lib.pyarrow_internal_check_status # ??? # _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ # > ??? # E pyarrow.lib.ArrowInvalid: Schema at index 1 was different: # E a: int64 # E b: list<item: null> # E vs # E a: int64 # E b: list<item: struct<c: int64>> # pyarrow/error.pxi:92: ArrowInvalid ``` ### Expected behavior arrow operations use schema provided through `features=` and not the one inferred from the data ### Environment info - datasets version: 4.4.1 - Platform: macOS-15.7.1-arm64-arm-64bit-Mach-O - Python version: 3.13.1 - huggingface_hub version: 1.1.4 - PyArrow version: 22.0.0 - Pandas version: 2.3.3 - fsspec version: 2025.10.0
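Until the declared `features=` schema is honored on this path, one possible workaround (a sketch only, and one that materializes the stream in memory, so it suits small datasets) is to iterate plain Python rows and build the Arrow table with the declared schema yourself, reusing the generator and features from the report above:
```python
import pyarrow as pa
import datasets
from datasets import features

common_features = features.Features(
    {
        "a": features.Value("int64"),
        "b": features.List({"c": features.Value("int64")}),
    }
)

def row_generator():
    yield {"a": 1, "b": []}
    yield {"a": 1, "b": [{"c": 1}]}

d = datasets.IterableDataset.from_generator(row_generator, features=common_features)

# Iterate plain python rows (no per-batch arrow schema inference) and build
# the table with the declared schema, so empty lists keep the struct type.
rows = list(d)
table = pa.Table.from_pylist(rows, schema=common_features.arrow_schema)
print(table.to_pandas())
```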
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7872/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7872/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7871/comments
https://api.github.com/repos/huggingface/datasets/issues/7871/events
https://github.com/huggingface/datasets/issues/7871
3,643,607,371
I_kwDODunzps7ZLQlL
7,871
Reqwest Error: HTTP status client error (429 Too Many Requests)
{ "avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4", "events_url": "https://api.github.com/users/yanan1116/events{/privacy}", "followers_url": "https://api.github.com/users/yanan1116/followers", "following_url": "https://api.github.com/users/yanan1116/following{/other_user}", "gists_url": "https://api.github.com/users/yanan1116/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yanan1116", "id": 26405281, "login": "yanan1116", "node_id": "MDQ6VXNlcjI2NDA1Mjgx", "organizations_url": "https://api.github.com/users/yanan1116/orgs", "received_events_url": "https://api.github.com/users/yanan1116/received_events", "repos_url": "https://api.github.com/users/yanan1116/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yanan1116/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanan1116/subscriptions", "type": "User", "url": "https://api.github.com/users/yanan1116", "user_view_type": "public" }
[]
open
false
null
[]
[ "the dataset repo: `https://huggingface.co/datasets/nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim`", "Hi @yanan1116,\n\nThanks for the detailed report! However, this issue was filed in the wrong repository. This is a `huggingface_hub` issue, not a `datasets` issue.\n\nLooking at your traceback, you're using the `hf download` CLI command (from `huggingface_hub`), and the error occurs in `huggingface_hub/file_download.py` at line 571 in the `xet_get` function. The `datasets` library is not involved in this download at all.\n\nThe 429 error means the CAS (Content Addressable Storage) service at `https://cas-server.xethub.hf.co` is rate-limiting your requests. The `huggingface_hub` library currently doesn't have automatic retry logic for 429 errors from the CAS service.\n\nPlease reopen this issue at: https://github.com/huggingface/huggingface_hub/issues" ]
2025-11-19T16:52:24
2025-11-30T03:32:00
null
NONE
null
null
null
null
### Describe the bug full error message: ``` Traceback (most recent call last): File "/home/yanan/miniconda3/bin/hf", line 7, in <module> sys.exit(main()) ~~~~^^ File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/hf.py", line 56, in main app() ~~~^^ File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 327, in __call__ raise e File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 310, in __call__ return get_command(self)(*args, **kwargs) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1161, in __call__ return self.main(*args, **kwargs) ~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/core.py", line 803, in main return _main( self, ...<6 lines>... **extra, ) File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/core.py", line 192, in _main rv = self.invoke(ctx) File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1697, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1443, in invoke return ctx.invoke(self.callback, **ctx.params) ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 788, in invoke return __callback(*args, **kwargs) File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 691, in wrapper return callback(**use_params) File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/download.py", line 188, in download _print_result(run_download()) ~~~~~~~~~~~~^^ File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/download.py", line 149, in run_download return snapshot_download( repo_id=repo_id, ...<10 lines>... dry_run=dry_run, ) File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn return fn(*args, **kwargs) File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/_snapshot_download.py", line 451, in snapshot_download thread_map( ~~~~~~~~~~^ _inner_hf_hub_download, ^^^^^^^^^^^^^^^^^^^^^^^ ...<3 lines>... 
tqdm_class=tqdm_class, ^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/contrib/concurrent.py", line 69, in thread_map return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/contrib/concurrent.py", line 51, in _executor_map return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs)) File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/std.py", line 1181, in __iter__ for obj in iterable: ^^^^^^^^ File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 619, in result_iterator yield _result_or_cancel(fs.pop()) ~~~~~~~~~~~~~~~~~^^^^^^^^^^ File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 317, in _result_or_cancel return fut.result(timeout) ~~~~~~~~~~^^^^^^^^^ File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 449, in result return self.__get_result() ~~~~~~~~~~~~~~~~~^^ File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 401, in __get_result raise self._exception File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/thread.py", line 59, in run result = self.fn(*self.args, **self.kwargs) File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/_snapshot_download.py", line 431, in _inner_hf_hub_download hf_hub_download( # type: ignore ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ repo_id, ^^^^^^^^ ...<14 lines>... dry_run=dry_run, ^^^^^^^^^^^^^^^^ ) ^ File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn return fn(*args, **kwargs) File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 986, in hf_hub_download return _hf_hub_download_to_local_dir( # Destination ...<16 lines>... dry_run=dry_run, ) File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 1390, in _hf_hub_download_to_local_dir _download_to_tmp_and_move( ~~~~~~~~~~~~~~~~~~~~~~~~~^ incomplete_path=paths.incomplete_path(etag), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<8 lines>... tqdm_class=tqdm_class, ^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 1791, in _download_to_tmp_and_move xet_get( ~~~~~~~^ incomplete_path=incomplete_path, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...<4 lines>... tqdm_class=tqdm_class, ^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 571, in xet_get download_files( ~~~~~~~~~~~~~~^ xet_download_info, ^^^^^^^^^^^^^^^^^^ ...<3 lines>... progress_updater=[progress_updater], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ RuntimeError: Data processing error: CAS service error : Reqwest Error: HTTP status client error (429 Too Many Requests), domain: https://cas-server.xethub.hf.co/reconstructions/04b8a4667b84b3b874a6a2f070cec88920f6289e71185d69fa87e3cf29834710 ``` ### Steps to reproduce the bug my command ```bash hf download nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim --repo-type dataset --include "single_panda_gripper.CoffeePressButton/**" --local-dir /home/yanan/robotics/Isaac-GR00T/gr00t_dataset_official/ ``` ### Expected behavior expect the data can be downloaded without any issue ### Environment info huggingface_hub 1.1.4
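As the comments above note, `huggingface_hub` currently has no automatic retry for 429 responses from the CAS service. A naive client-side retry sketch around `snapshot_download`, with the repo, include pattern, and local directory taken from the command in the report; the backoff values are arbitrary assumptions.
```python
import time
from huggingface_hub import snapshot_download

# Retry the download with a growing backoff when the CAS service
# answers 429 Too Many Requests.
for attempt in range(5):
    try:
        snapshot_download(
            repo_id="nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim",
            repo_type="dataset",
            allow_patterns=["single_panda_gripper.CoffeePressButton/**"],
            local_dir="/home/yanan/robotics/Isaac-GR00T/gr00t_dataset_official/",
        )
        break
    except Exception as err:  # the 429 surfaces as a RuntimeError from the xet backend
        if attempt == 4:
            raise
        wait = 30 * (attempt + 1)
        print(f"Download failed ({err}); retrying in {wait}s")
        time.sleep(wait)
```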
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7871/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7871/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7870/comments
https://api.github.com/repos/huggingface/datasets/issues/7870/events
https://github.com/huggingface/datasets/issues/7870
3,642,209,953
I_kwDODunzps7ZF7ah
7,870
Visualization for Medical Imaging Datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4", "events_url": "https://api.github.com/users/CloseChoice/events{/privacy}", "followers_url": "https://api.github.com/users/CloseChoice/followers", "following_url": "https://api.github.com/users/CloseChoice/following{/other_user}", "gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/CloseChoice", "id": 31857876, "login": "CloseChoice", "node_id": "MDQ6VXNlcjMxODU3ODc2", "organizations_url": "https://api.github.com/users/CloseChoice/orgs", "received_events_url": "https://api.github.com/users/CloseChoice/received_events", "repos_url": "https://api.github.com/users/CloseChoice/repos", "site_admin": false, "starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions", "type": "User", "url": "https://api.github.com/users/CloseChoice", "user_view_type": "public" }
[]
closed
false
null
[]
[ "It would be amazing to be able to show the Papaya UI in google colab / jupyter notebook. IIRC both allow serving javascript via nbextensions that we can surely use in HTML() objects.\n\nAlternatively we could also start with a simple approach and dump the medical image data as a video file that goes through the slices, so we don't need javascript." ]
2025-11-19T11:05:39
2025-11-21T12:31:19
2025-11-21T12:31:19
CONTRIBUTOR
null
null
null
null
This is a followup to: https://github.com/huggingface/datasets/pull/7815. I checked the possibilities to visualize the nifti (and potentially dicom), and here's what I found: - https://github.com/aces/brainbrowser, AGPL3 license, last commit 3 months ago, latest (github) release from 2017. It's available on jsdelivr: https://www.jsdelivr.com/package/npm/brainbrowser (but that is from 2015!) - https://github.com/rii-mango/Papaya, custom but BSD-style license that would require datasets to list the conditions in their readme somewhere, last commit June 2024. I looked into this library and it looks mature and good enough for our use case; working on it for only a short time I wasn't able to get it running, but I'm sure we could, although it would probably require some JS on datasets' end. Available on jsdelivr as well: https://www.jsdelivr.com/package/npm/papaya-viewer. Seems like it's frequently loaded. - https://github.com/hanayik/niivue, BSD3 license, last commit May 26, 2021. Archived. Doesn't look like an option. I think the only real option for us is Papaya, but there is also the risk that we'll end up with an unmaintained package after a while, since development seems to be slow or even halted. I think conceptually we need to figure out how to build a good solution for visualizing medical image data. In shap, we have a separate javascript folder in which we render visualizations; this could be a blueprint but will require a bundler, etc. Alternatively one could go with a naive approach and just write some html code in a python string and load the package via jsdelivr. @lhoestq thoughts?
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7870/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7870/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 1:25:40
https://api.github.com/repos/huggingface/datasets/issues/7869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7869/comments
https://api.github.com/repos/huggingface/datasets/issues/7869/events
https://github.com/huggingface/datasets/issues/7869
3,636,808,734
I_kwDODunzps7YxUwe
7,869
Why does dataset merge fail when tools have different parameters?
{ "avatar_url": "https://avatars.githubusercontent.com/u/116297296?v=4", "events_url": "https://api.github.com/users/hitszxs/events{/privacy}", "followers_url": "https://api.github.com/users/hitszxs/followers", "following_url": "https://api.github.com/users/hitszxs/following{/other_user}", "gists_url": "https://api.github.com/users/hitszxs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hitszxs", "id": 116297296, "login": "hitszxs", "node_id": "U_kgDOBu6OUA", "organizations_url": "https://api.github.com/users/hitszxs/orgs", "received_events_url": "https://api.github.com/users/hitszxs/received_events", "repos_url": "https://api.github.com/users/hitszxs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hitszxs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hitszxs/subscriptions", "type": "User", "url": "https://api.github.com/users/hitszxs", "user_view_type": "public" }
[]
open
false
null
[]
[ "Hi @hitszxs,\n This is indeed by design,\n\nThe `datasets` library is built on top of [Apache Arrow](https://arrow.apache.org/), which uses a **columnar storage format** with strict schema requirements. When you try to concatenate/merge datasets, the library checks if features can be aligned using the [`_check_if_features_can_be_aligned`](https://github.com/huggingface/datasets/blob/main/src/datasets/features/features.py#L2297-L2316) function.\n\nTwo datasets can be merged if:\n1. Columns with the same name have the **same type**, OR\n2. One of them has `Value(\"null\")` (representing missing data)\n\nFor struct types (nested dictionaries like your tool schemas), **all fields must match exactly**. This ensures type safety and efficient columnar storage.\n\n## Workarounds for Your Use Case\n Store tools as JSON strings\n\nInstead of using nested struct types, store the tool definitions as JSON strings\n\n\n" ]
2025-11-18T08:33:04
2025-11-30T03:52:07
null
NONE
null
null
null
null
Hi, I have a question about SFT (Supervised Fine-tuning) for an agent model. Suppose I want to fine-tune an agent model that may receive two different tools: tool1 and tool2. These tools have different parameters and types in their schema definitions. When I try to merge datasets containing different tool definitions, I get the following error: TypeError: Couldn't cast array of type struct<refundFee: struct<description: string, type: string>, ... , servicerId: struct<description: string, type: string>> to { 'refundFee': {'description': Value(dtype='string'), 'type': Value(dtype='string')}, ... 'templateId': {'description': Value(dtype='string'), 'type': Value(dtype='string')} } From my understanding, the merge fails because the tools column's nested structure is different across datasets — e.g., one struct contains an extra field servicerId while the other does not. This causes HuggingFace Datasets (and its underlying Apache Arrow schema) to reject the merge. My question is: why is it designed this way? Is this strict schema matching a hard requirement of the library? Is there a recommended way to merge datasets with different tool schemas (different parameters and types)? For an agent model supporting multiple tools, what's the best practice for preparing/merging training data without losing flexibility? Any guidance or design rationale would be greatly appreciated. Thanks!
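A minimal sketch of the JSON-string workaround suggested in the comment above: serialize the `tools` column to strings before concatenation so both datasets share one Arrow schema, then parse it back at training time. The column names and tool schemas below are illustrative, not taken from the actual training data.
```python
import json
from datasets import Dataset, concatenate_datasets

# Hypothetical datasets whose tool schemas differ (extra/other parameters).
ds_tool1 = Dataset.from_dict({"messages": ["..."], "tools": [[{"name": "tool1", "parameters": {"refundFee": "int"}}]]})
ds_tool2 = Dataset.from_dict({"messages": ["..."], "tools": [[{"name": "tool2", "parameters": {"servicerId": "str"}}]]})

def tools_to_json(example):
    # Store the tool schema as an opaque JSON string so Arrow sees a plain
    # string column instead of two incompatible struct types.
    example["tools"] = json.dumps(example["tools"], ensure_ascii=False)
    return example

merged = concatenate_datasets([
    ds_tool1.map(tools_to_json),
    ds_tool2.map(tools_to_json),
])
print(json.loads(merged[0]["tools"]))  # parse back when building training samples
```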
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7869/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7869/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7868/comments
https://api.github.com/repos/huggingface/datasets/issues/7868/events
https://github.com/huggingface/datasets/issues/7868
3,632,429,308
I_kwDODunzps7Ygnj8
7,868
Data duplication with `split_dataset_by_node` and `interleaved_dataset`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42485228?v=4", "events_url": "https://api.github.com/users/ValMystletainn/events{/privacy}", "followers_url": "https://api.github.com/users/ValMystletainn/followers", "following_url": "https://api.github.com/users/ValMystletainn/following{/other_user}", "gists_url": "https://api.github.com/users/ValMystletainn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ValMystletainn", "id": 42485228, "login": "ValMystletainn", "node_id": "MDQ6VXNlcjQyNDg1MjI4", "organizations_url": "https://api.github.com/users/ValMystletainn/orgs", "received_events_url": "https://api.github.com/users/ValMystletainn/received_events", "repos_url": "https://api.github.com/users/ValMystletainn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ValMystletainn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ValMystletainn/subscriptions", "type": "User", "url": "https://api.github.com/users/ValMystletainn", "user_view_type": "public" }
[]
open
false
null
[]
[ "Hi @ValMystletainn ,\nCan I be assigned this issue?", "> split_dataset_by_node\n\nHello, I have some questions about your intended use: (1) It seems unnecessary to use interleaving for a single dataset. (2) For multiple datasets, it seems possible to interleave first and then split by node?" ]
2025-11-17T09:15:24
2025-11-29T03:21:34
null
NONE
null
null
null
null
### Describe the bug Data duplication in different rank, when process a iterabledataset with first `split_dataset_by_node` and then `interleaved_dataset` ### Steps to reproduce the bug I have provide a minimum scripts ```python import os from datasets import interleave_datasets, load_dataset from datasets.distributed import split_dataset_by_node path = "/mnt/wwx/datasets/fineweb/data/CC-MAIN-2013-20/" files = [os.path.join(path, fn) for fn in os.listdir(path)] dataset = load_dataset("parquet", split="train", data_files=files, streaming=True) print(f"{dataset.n_shards=}") dataset_rank0 = split_dataset_by_node(dataset, 0, 4) dataset_rank1 = split_dataset_by_node(dataset, 1, 4) dataset_rank0_interleaved = interleave_datasets([dataset_rank0], seed=42, probabilities=[1.0]) dataset_rank1_interleaved = interleave_datasets([dataset_rank1], seed=42, probabilities=[1.0]) print("print the first sample id from all datasets") print("dataset", next(iter(dataset))['id']) print("dataset_rank0", next(iter(dataset_rank0))['id']) print("dataset_rank1", next(iter(dataset_rank1))['id']) print("dataset_rank0_interleaved", next(iter(dataset_rank0_interleaved))['id']) print("dataset_rank1_interleaved", next(iter(dataset_rank1_interleaved))['id']) dataset_rank0_shard = dataset.shard(4, 0) dataset_rank1_shard = dataset.shard(4, 1) dataset_rank0_shard_interleaved = interleave_datasets([dataset_rank0_shard], seed=42, probabilities=[1.0]) dataset_rank1_shard_interleaved = interleave_datasets([dataset_rank1_shard], seed=42, probabilities=[1.0]) print("dataset_rank0_shard", next(iter(dataset_rank0_shard))['id']) print("dataset_rank1_shard", next(iter(dataset_rank1_shard))['id']) print("dataset_rank0_shard_interleaved", next(iter(dataset_rank0_shard_interleaved))['id']) print("dataset_rank1_shard_interleaved", next(iter(dataset_rank1_shard_interleaved))['id']) ``` I just use a subfold of C4 with 14 paruets to do the quick run and get ``` dataset.n_shards=14 print the first sample id from all datasets dataset <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae> dataset_rank0 <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae> dataset_rank1 <urn:uuid:6b7da64f-c26e-4086-aef5-4b6f01106223> dataset_rank0_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae> dataset_rank1_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae> dataset_rank0_shard <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae> dataset_rank1_shard <urn:uuid:67cf7216-dd05-4f55-a28a-1a1c96989c51> dataset_rank0_shard_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae> dataset_rank1_shard_interleaved <urn:uuid:67cf7216-dd05-4f55-a28a-1a1c96989c51> ``` ### Expected behavior the first sample of `dataset_rank0_interleaved` and `dataset_rank1_interleaved` should be different, as other `rank0` `rank1` couples. I have dive into the function and try to find how it work in `split -> interleaved` process. the `split_dataset_by_node` of iterable dataset does't not change `._ex_iterable` attribute of the dataset. it just set the distributed config in dataset, and the distributed dataset is used in actually `__iter__` call, to handle with shard split or sample skipping. however, in `interleaved_dataset` of iterable dataset. it copy out all of the `._ex_iterable` of provided datasets, and consist a new `_ex_iterable`, so the missing copy of `distributed config` caused the data duplication in different dp rank. So I may first ask, is it an unexpected using order of those function, which means: - always do `split_dataset_by_node` at final rather than in middle way. 
- or use `dataset.shard(dp_size, dp_rank)` rather than `split_dataset_by_node` in case similar of mine. if the using order is permiited, I think it is a bug, and I can do a PR to fix it (I meet this bug in real training, related issue is https://github.com/ByteDance-Seed/VeOmni/issues/200 if it helps. ### Environment info datasets 4.4.1 ubuntu 20.04 python 3.11.4
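Following the ordering suggested in the comments above (interleave first, then split by node), a hedged sketch of how the pipeline from the script could be rearranged so each rank sees distinct shards; the data path is the one from the report and is environment specific.
```python
import os
from datasets import interleave_datasets, load_dataset
from datasets.distributed import split_dataset_by_node

path = "/mnt/wwx/datasets/fineweb/data/CC-MAIN-2013-20/"
files = [os.path.join(path, fn) for fn in os.listdir(path)]
dataset = load_dataset("parquet", split="train", data_files=files, streaming=True)

# Interleave first (a no-op here with a single source), then split by node,
# so the distributed config is applied to the final iterable.
interleaved = interleave_datasets([dataset], probabilities=[1.0], seed=42)
rank0 = split_dataset_by_node(interleaved, rank=0, world_size=4)
rank1 = split_dataset_by_node(interleaved, rank=1, world_size=4)

print(next(iter(rank0))["id"])
print(next(iter(rank1))["id"])  # expected to differ from rank0
```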
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7868/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7868/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7867/comments
https://api.github.com/repos/huggingface/datasets/issues/7867/events
https://github.com/huggingface/datasets/issues/7867
3,620,931,722
I_kwDODunzps7X0wiK
7,867
NonMatchingSplitsSizesError when loading partial dataset files
{ "avatar_url": "https://avatars.githubusercontent.com/u/13678719?v=4", "events_url": "https://api.github.com/users/QingGo/events{/privacy}", "followers_url": "https://api.github.com/users/QingGo/followers", "following_url": "https://api.github.com/users/QingGo/following{/other_user}", "gists_url": "https://api.github.com/users/QingGo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/QingGo", "id": 13678719, "login": "QingGo", "node_id": "MDQ6VXNlcjEzNjc4NzE5", "organizations_url": "https://api.github.com/users/QingGo/orgs", "received_events_url": "https://api.github.com/users/QingGo/received_events", "repos_url": "https://api.github.com/users/QingGo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/QingGo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/QingGo/subscriptions", "type": "User", "url": "https://api.github.com/users/QingGo", "user_view_type": "public" }
[]
open
false
null
[]
[ "While using verification_mode='no_checks' parameter in load_dataset() can bypass this validation, this solution is not intuitive or convenient for most users, especially those who are not familiar with all the parameters of the load_dataset() function.\n\n```python\nbook_corpus_ds = load_dataset(\n \"SaylorTwift/the_pile_books3_minus_gutenberg\",\n name=\"default\",\n data_files=\"data/train-00000-of-00213-312fd8d7a3c58a63.parquet\",\n split=\"train\",\n cache_dir=\"./data\",\n verification_mode='no_checks'\n)\n```", "Thanks for the report and reproduction steps @QingGo \n@lhoestq which one of the following looks like a nicer way to handle this?\n\n1] Skip split-size validation entirely for partial loads\nIf the user passes data_files manually and it represents only a subset, then verify_splits() should simply not run, or skip validation only for that split.\n\n2] Replace the error with a warning\n\n3] Automatically detect partial-load cases(i mean we can try this out!)\n\nAssume this, \nIf data_files is provided AND\nthe number of provided files ≠ number of expected files in metadata,\nthen treat it as a partial load and disable strict verification.\n" ]
2025-11-13T12:03:23
2025-11-16T15:39:23
null
NONE
null
null
null
null
### Describe the bug When loading only a subset of dataset files while the dataset's README.md contains split metadata, the system throws a NonMatchingSplitsSizesError . This prevents users from loading partial datasets for quick validation in cases of poor network conditions or very large datasets. ### Steps to reproduce the bug 1. Use the Hugging Face `datasets` library to load a dataset with only specific files specified 2. Ensure the dataset repository has split metadata defined in README.md 3. Observe the error when attempting to load a subset of files ```python # Example code that triggers the error from datasets import load_dataset book_corpus_ds = load_dataset( "SaylorTwift/the_pile_books3_minus_gutenberg", name="default", data_files="data/train-00000-of-00213-312fd8d7a3c58a63.parquet", split="train", cache_dir="./data" ) ``` ### Error Message ``` Traceback (most recent call last): File "/Users/QingGo/code/llm_learn/src/data/clean_cc_bc.py", line 13, in <module> book_corpus_ds = load_dataset( "SaylorTwift/the_pile_books3_minus_gutenberg", ... File "/Users/QingGo/code/llm_learn/.venv/lib/python3.13/site-packages/datasets/utils/info_utils.py", line 77, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.exceptions.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=106199627990.47722, num_examples=192661, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=454897326, num_examples=905, shard_lengths=None, dataset_name='the_pile_books3_minus_gutenberg')}] ``` ### Expected behavior When loading partial dataset files, the system should: 1. Skip the `NonMatchingSplitsSizesError` validation, OR 2. Only log a warning message instead of raising an error ### Environment info - `datasets` version: 4.3.0 - Platform: macOS-15.7.1-arm64-arm-64bit-Mach-O - Python version: 3.13.2 - `huggingface_hub` version: 0.36.0 - PyArrow version: 22.0.0 - Pandas version: 2.3.3 - `fsspec` version: 2025.9.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7867/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7867/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7864/comments
https://api.github.com/repos/huggingface/datasets/issues/7864/events
https://github.com/huggingface/datasets/issues/7864
3,619,137,823
I_kwDODunzps7Xt6kf
7,864
add_column and add_item erroneously(?) require new_fingerprint parameter
{ "avatar_url": "https://avatars.githubusercontent.com/u/17151810?v=4", "events_url": "https://api.github.com/users/echthesia/events{/privacy}", "followers_url": "https://api.github.com/users/echthesia/followers", "following_url": "https://api.github.com/users/echthesia/following{/other_user}", "gists_url": "https://api.github.com/users/echthesia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/echthesia", "id": 17151810, "login": "echthesia", "node_id": "MDQ6VXNlcjE3MTUxODEw", "organizations_url": "https://api.github.com/users/echthesia/orgs", "received_events_url": "https://api.github.com/users/echthesia/received_events", "repos_url": "https://api.github.com/users/echthesia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/echthesia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/echthesia/subscriptions", "type": "User", "url": "https://api.github.com/users/echthesia", "user_view_type": "public" }
[]
open
false
null
[]
[ "Take this with a grain of salt, this is just my personal understanding:\nWhile you technically can overwrite the new_fingerprint with a string, e.g.\n```python\nt = d.add_column(\"new_column\", col_value, new_fingerprint=\"dummy_fp\")\nassert t._fingerprint == \"dummy_fp\" # this is true and will pass\n```\nthis is not desired since the fingerprint should be calculated based on the operations (and their arguments) to be unique. This is handled by the [fingerprint_transform](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6077) function which needs a \"new_fingerprint\" keyword argument and creates a unique hash if its value is not set, see [here](https://github.com/huggingface/datasets/blob/main/src/datasets/fingerprint.py#L432). So it is probably safer to not document this keyword, since one doesn't want the user to actually use it and it's only a feature in very limited cases for people really knowing what they are doing. The thing that might be bugging people who read the code is that `new_fingerprint` seems to be required for `add_item` and `add_column` but it is actually set by the decorator (in which's definition it is optional), so maybe changing the signature of `add_item` and `add_column` to `new_fingerprint: Optional[str] = None` would make sense, since this is also how it's handled in the other cases (created by claude):\n\n - [flatten](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2034)\n - [cast_column](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2165)\n - [remove_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2209)\n - [rename_column](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2263)\n - [rename_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2329)\n - [select_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2397)\n - [batch](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3760)\n - [filter](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3813)\n - [flatten_indices](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3959)\n - [select](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4038)\n - [_select_contiguous](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4128)\n - [sort](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4376)\n - [shuffle](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4506)\n - [train_test_split](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4641)\nSo as you mentioned, I believe the methods erronously require the `new_fingerprint` parameter and making them optional is a little consistency win." ]
2025-11-13T02:56:49
2025-11-24T20:33:59
null
NONE
null
null
null
null
### Describe the bug Contradicting their documentation (which doesn't mention the parameter at all), both Dataset.add_column and Dataset.add_item require a new_fingerprint string. This parameter is passed directly to the dataset constructor, which has the fingerprint parameter listed as optional; is there any reason it shouldn't be optional in these methods as well? ### Steps to reproduce the bug Reproduction steps: 1. Look at the function signature for add_column: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6078 2. Repeat for add_item: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6336 ### Expected behavior add_column and add_item should either set the fingerprint parameter to optional or include it in their docstrings ### Environment info Not environment-dependent
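A minimal sketch (not from the issue) illustrating why making the parameter optional would match current behavior: the `fingerprint_transform` decorator fills in `new_fingerprint` when the caller omits it, and only uses an explicit value as an override, as in the comment above.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})

# No new_fingerprint passed: the decorator computes a hash from the operation.
ds2 = ds.add_column("b", [4, 5, 6])
print(ds2._fingerprint)

# Explicit override, as demonstrated in the comment above.
ds3 = ds.add_column("b", [4, 5, 6], new_fingerprint="dummy_fp")
print(ds3._fingerprint)  # "dummy_fp"
```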
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7864/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7864/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7863
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7863/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7863/comments
https://api.github.com/repos/huggingface/datasets/issues/7863/events
https://github.com/huggingface/datasets/issues/7863
3,618,836,821
I_kwDODunzps7XsxFV
7,863
Support hosting lance / vortex / iceberg / zarr datasets on huggingface hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/3664715?v=4", "events_url": "https://api.github.com/users/pavanramkumar/events{/privacy}", "followers_url": "https://api.github.com/users/pavanramkumar/followers", "following_url": "https://api.github.com/users/pavanramkumar/following{/other_user}", "gists_url": "https://api.github.com/users/pavanramkumar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pavanramkumar", "id": 3664715, "login": "pavanramkumar", "node_id": "MDQ6VXNlcjM2NjQ3MTU=", "organizations_url": "https://api.github.com/users/pavanramkumar/orgs", "received_events_url": "https://api.github.com/users/pavanramkumar/received_events", "repos_url": "https://api.github.com/users/pavanramkumar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pavanramkumar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pavanramkumar/subscriptions", "type": "User", "url": "https://api.github.com/users/pavanramkumar", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
[ "Kudos!", "So cool ! Would love to see support for lance :)", "@lhoestq thanks for your support! Any suggestions across `datasets` or `huggingface_hub` projects to make this happen?\n\nI just noticed this blog post: https://huggingface.co/blog/streaming-datasets\n\nDo you know if `hfFileSystem` from `huggingface_hub` is flexible enough to accommodate lance? I don't want to `open` and scan a file, I want to create generators with the `lance.dataset.to_batches()` from each fragment (partition) that I can iterate over in a distributed dataloader.\n\nIdeally, something like this should just work:\n\n```\nimport lance\nlance_ds_path = f\"hf://datasets/{dataset_id}/{path_in_repo}.lance\"\nds = lance.dataset(lance_ds_path)\nfragments = ds.get_fragments()\nfragment_generators = []\nfor fragment in fragments:\n fragment_generators = fragment.to_batches()\n```\n\nLooking at the huggingface blog post, I think we might need a PR into `pyarrow` to create a `LanceFragmentScanOptions` class that subclasses [pyarrow.dataset.FragmentScanOptions](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.FragmentScanOptions.html#pyarrow.dataset.FragmentScanOptions) cc @prrao87, @changhiskhan", "> Do you know if HfFileSystem from huggingface_hub is flexible enough to accommodate lance?\n\nit provides file-like objects for files on HF, and works using range requests. PyArrow uses HfFileSystem for HF files already\n\nThough in the Parquet / PyArrow case the data is read generally row group per row group (using range requests with a minimum size `range_size_limit ` to optimize I/O in case of small row groups)\n\nPS: there is an equivalent to HfFileSystem in rust in OpenDAL, but it only supports read from HF, not write (yet ?)\n\n> I don't want to open and scan a file, I want to create generators with the lance.dataset.to_batches() from each fragment (partition) that I can iterate over in a distributed dataloader.\n\nWe do something very similar for Parquet here: \n\nhttps://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/packaged_modules/parquet/parquet.py#L168-L169", "Hi, I work on the Lance project. We'd be happy to see the format supported on huggingface hub.\n\nIt's not clear to me from this thread what is required for that. Could we clarify that? Are there examples we can point to?\n\n> I think we might need a PR into `pyarrow` to create a `LanceFragmentScanOptions` class that subclasses [pyarrow.dataset.FragmentScanOptions](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.FragmentScanOptions.html#pyarrow.dataset.FragmentScanOptions)\n\nCould you elaborate why a `FragmentScanOptions` subclass is required? Also, if it is, we could just define that as a subclass within the `pylance` module, unless I'm missing something.\n\nLance supports OpenDAL storage, so I think we could add support for huggingface's filesystem through that and make sure it's exposed in pylance. Could also help implement some write operations. Perhaps that's the main blocker? ", "> PS: there is an equivalent to HfFileSystem in rust in OpenDAL, but it only supports read from HF, not write (yet ?)\n\nHi, I’m willing to add full-fledged support for the HF file system. This shouldn’t be considered a blocker. 🤟 ", "Exposing the existing HF filesystem from OpenDAL in pylance would be great ! and a good first step\n\nExcited for write operations too", "Thanks @lhoestq @wjones127 @Xuanwo ! 
I think we have all the necessary people on this thread now to make it happen :)\n\n> Could you elaborate why a FragmentScanOptions subclass is required? Also, if it is, we could just define that as a subclass within the pylance module, unless I'm missing something.\n\n@wjones127 I'm not actually sure this is needed but I'm guessing based on [this blog post](https://huggingface.co/blog/streaming-datasets) from a couple of weeks ago. Specifically, this section which allows creation of a dataset object with configurable prefetching:\n\n```\nimport pyarrow\nimport pyarrow.dataset\n\nfragment_scan_options = pyarrow.dataset.ParquetFragmentScanOptions(\n cache_options=pyarrow.CacheOptions(\n prefetch_limit=1,\n range_size_limit=128 << 20\n ),\n)\nds = load_dataset(parquet_dataset_id, streaming=True, fragment_scan_options=fragment_scan_options)\n```\n\nI might be completely wrong that we do need an equivalent `LanceFragmentScanOptions` PR into `pyarrow` and the `OpenDAL` path might be sufficient.\n\nI really just want something like this to work out of the box:\n\n```\nimport lance\nlance_ds_path = f\"hf://datasets/{dataset_id}/{path_in_repo}.lance\"\nds = lance.dataset(lance_ds_path)\nfragments = ds.get_fragments()\nfragment_generators = []\nfor fragment in fragments:\n fragment_generators = fragment.to_batches()\n```\n\nIn the ideal case, I'd like to be able to control prefetch configuration via arguments to `to_batches()` like the ones that already exist for a lance dataset on any S3-compatible object store.\n\nWould a useful approach be to create a toy lance dataset on huggingface and see if this \"just works\"; then work backwards from there?\n\nAs for writing, I'm looking to migrate datasets from my own private S3-compatible object store bucket (Tigris Data) to huggingface datasets but ~~I'm 100% sure~~ I'm _not_ 100% sure whether we even need `hfFileSystem` compatible write capability\n\n\n", "Here's a public dataset which could be a working example to work backwards from:\n\nhttps://huggingface.co/datasets/pavan-ramkumar/test-slaf\n\npylance currently looks for default object store backends and returns this `ValueError`\n\n```\n>>> import lance\n>>> hf_path = \"hf://datasets/pavan-ramkumar/test-slaf/tree/main/synthetic_50k_processed_v21.slaf/expression.lance\"\n>>> ds = lance.dataset(hf_path)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/Users/pavan/slaf-project/slaf/.venv/lib/python3.12/site-packages/lance/__init__.py\", line 145, in dataset\n ds = LanceDataset(\n ^^^^^^^^^^^^^\n File \"/Users/pavan/slaf-project/slaf/.venv/lib/python3.12/site-packages/lance/dataset.py\", line 425, in __init__\n self._ds = _Dataset(\n ^^^^^^^^^\nValueError: Invalid user input: No object store provider found for scheme: 'hf'\nValid schemes: gs, memory, s3, az, file-object-store, file, oss, s3+ddb, /Users/runner/work/lance/lance/rust/lance-io/src/object_store/providers.rs:161:54\n```", "@Xuanwo @wjones127 just checking in to see if you had a chance to add a huggingface provider via opendal to pylance. I'm assuming we need a new `huggingface.rs` provider [here](https://github.com/lance-format/lance/tree/4d9c1a4d459ea486556de0ee90828a442d0425b0/rust/lance-io/src/object_store/providers).\n\nDo let me know if I can do anything to help, really excited to help stream lance datasets from huggingface hub", "> @Xuanwo @wjones127 just checking in to see if you had a chance to add a huggingface provider via opendal to pylance. 
I'm assuming we need a new `huggingface.rs` provider [here](https://github.com/lance-format/lance/tree/4d9c1a4d459ea486556de0ee90828a442d0425b0/rust/lance-io/src/object_store/providers).\n> \n> Do let me know if I can do anything to help, really excited to help stream lance datasets from huggingface hub\n\nI'm willing to work on this! Would you like to create an issue on lance side and ping me there?", " > I'm willing to work on this! Would you like to create an issue on lance side and ping me there?\n\nDone! [Link](https://github.com/lance-format/lance/issues/5346)\n", "@pavanramkumar pls check this out once it's merged! https://github.com/lance-format/lance/pull/5353" ]
2025-11-13T00:51:07
2025-11-26T14:10:29
null
NONE
null
null
null
null
### Feature request Huggingface datasets has great support for large tabular datasets in parquet with large partitions. I would love to see two things in the future: - equivalent support for `lance`, `vortex`, `iceberg`, `zarr` (in that order) in a way that I can stream them using the datasets library - more fine-grained control of streaming, so that I can stream at the partition / shard level ### Motivation I work with very large `lance` datasets on S3 and often require random access for AI/ML applications like multi-node training. I was able to achieve high throughput dataloading on a lance dataset with ~150B rows by building distributed dataloaders that can be scaled both vertically (until i/o and CPU are saturated), and then horizontally (to workaround network bottlenecks). Using this strategy I was able to achieve 10-20x the throughput of the streaming data loader from the `huggingface/datasets` library. I realized that these would be great features for huggingface to support natively ### Your contribution I'm not ready yet to make a PR but open to it with the right pointers!
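For reference, a rough sketch of the fragment-level streaming pattern discussed in the comments above, applied to Parquet over the existing `hf://` filesystem; the repository path below is a placeholder, not a real dataset.

```python
import pyarrow.dataset as pds
from huggingface_hub import HfFileSystem

fs = HfFileSystem()  # fsspec filesystem for hf:// paths
ds = pds.dataset(
    "datasets/username/my-parquet-dataset/data",  # hypothetical repo path
    filesystem=fs,
    format="parquet",
)
for fragment in ds.get_fragments():      # roughly one fragment per file
    for batch in fragment.to_batches():
        ...                              # hand record batches to a dataloader
```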
null
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 2, "heart": 5, "hooray": 2, "laugh": 2, "rocket": 8, "total_count": 23, "url": "https://api.github.com/repos/huggingface/datasets/issues/7863/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7863/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7861/comments
https://api.github.com/repos/huggingface/datasets/issues/7861/events
https://github.com/huggingface/datasets/issues/7861
3,611,821,713
I_kwDODunzps7XSAaR
7,861
Performance Issue: save_to_disk() 200-1200% slower due to unconditional flatten_indices()
{ "avatar_url": "https://avatars.githubusercontent.com/u/222552287?v=4", "events_url": "https://api.github.com/users/KCKawalkar/events{/privacy}", "followers_url": "https://api.github.com/users/KCKawalkar/followers", "following_url": "https://api.github.com/users/KCKawalkar/following{/other_user}", "gists_url": "https://api.github.com/users/KCKawalkar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/KCKawalkar", "id": 222552287, "login": "KCKawalkar", "node_id": "U_kgDODUPg3w", "organizations_url": "https://api.github.com/users/KCKawalkar/orgs", "received_events_url": "https://api.github.com/users/KCKawalkar/received_events", "repos_url": "https://api.github.com/users/KCKawalkar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/KCKawalkar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KCKawalkar/subscriptions", "type": "User", "url": "https://api.github.com/users/KCKawalkar", "user_view_type": "public" }
[]
open
false
null
[]
[]
2025-11-11T11:05:38
2025-11-11T11:05:38
null
NONE
null
null
null
null
## 🐛 Bug Description

The `save_to_disk()` method unconditionally calls `flatten_indices()` when `_indices` is not None, causing severe performance degradation for datasets processed with filtering, shuffling, or multiprocessed mapping operations.

**Root cause**: This line rebuilds the entire dataset unnecessarily:

```python
dataset = self.flatten_indices() if self._indices is not None else self
```

## 📊 Performance Impact

| Dataset Size | Operation | Save Time | Slowdown |
|-------------|-----------|-----------|----------|
| 100K | Baseline (no indices) | 0.027s | - |
| 100K | Filtered (with indices) | 0.146s | **+431%** |
| 100K | Shuffled (with indices) | 0.332s | **+1107%** |
| 250K | Shuffled (with indices) | 0.849s | **+1202%** |

## 🔄 Reproduction

```python
from datasets import Dataset
import time

# Create dataset
dataset = Dataset.from_dict({'text': [f'sample {i}' for i in range(100000)]})

# Baseline save (no indices)
start = time.time()
dataset.save_to_disk('baseline')
baseline_time = time.time() - start

# Filtered save (creates indices)
filtered = dataset.filter(lambda x: True)
start = time.time()
filtered.save_to_disk('filtered')
filtered_time = time.time() - start

print(f"Baseline: {baseline_time:.3f}s")
print(f"Filtered: {filtered_time:.3f}s")
print(f"Slowdown: {(filtered_time/baseline_time-1)*100:.1f}%")
```

**Expected output**: Filtered dataset is 400-1000% slower than baseline

## 💡 Proposed Solution

Add an optional parameter to control flattening:

```python
def save_to_disk(self, dataset_path, flatten_indices=True):
    dataset = self.flatten_indices() if (self._indices is not None and flatten_indices) else self
    # ... rest of save logic
```

**Benefits**:
- ✅ Immediate performance improvement for users who don't need flattening
- ✅ Backwards compatible (default behavior unchanged)
- ✅ Simple implementation

## 🌍 Environment

- **datasets version**: 2.x
- **Python**: 3.10+
- **OS**: Linux/macOS/Windows

## 📈 Impact

This affects **most ML preprocessing workflows** that filter/shuffle datasets before saving. Performance degradation grows with dataset size, making it a critical bottleneck for production systems.

## 🔗 Additional Resources

We have comprehensive test scripts demonstrating this across multiple scenarios if needed for further investigation.
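A small follow-up sketch (same synthetic data as the reproduction above) that separates the cost of `flatten_indices()` from the cost of the write itself, which is useful when profiling this:

```python
import time
from datasets import Dataset

ds = Dataset.from_dict({"text": [f"sample {i}" for i in range(100000)]})
filtered = ds.filter(lambda x: True)  # creates an indices mapping

start = time.time()
flat = filtered.flatten_indices()     # materializes rows in index order
print(f"flatten_indices: {time.time() - start:.3f}s")

start = time.time()
flat.save_to_disk("filtered_flat")    # no indices left, so nothing to flatten
print(f"save_to_disk:    {time.time() - start:.3f}s")
```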
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7861/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7861/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7856
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7856/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7856/comments
https://api.github.com/repos/huggingface/datasets/issues/7856/events
https://github.com/huggingface/datasets/issues/7856
3,603,729,142
I_kwDODunzps7WzIr2
7,856
Missing transcript column when loading a local dataset with "audiofolder"
{ "avatar_url": "https://avatars.githubusercontent.com/u/10166907?v=4", "events_url": "https://api.github.com/users/gweltou/events{/privacy}", "followers_url": "https://api.github.com/users/gweltou/followers", "following_url": "https://api.github.com/users/gweltou/following{/other_user}", "gists_url": "https://api.github.com/users/gweltou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gweltou", "id": 10166907, "login": "gweltou", "node_id": "MDQ6VXNlcjEwMTY2OTA3", "organizations_url": "https://api.github.com/users/gweltou/orgs", "received_events_url": "https://api.github.com/users/gweltou/received_events", "repos_url": "https://api.github.com/users/gweltou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gweltou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gweltou/subscriptions", "type": "User", "url": "https://api.github.com/users/gweltou", "user_view_type": "public" }
[]
closed
false
null
[]
[ "First bad commit 5c8869f8c36dbc8c8d423030b7b7c4fd64f8c729\n\nEDIT: This is not a bug or a regression. It was a breaking change introduced in the commit I mentioned and was also documented in there. The docs state how to handle this now, see https://huggingface.co/docs/datasets/main/en/audio_load#audiofolder-with-metadata\n\nor simply, move your metadata into the splits folder and update the paths, in your case this would look like this:\n```bash\nmy_dataset/\n - data/\n - test/\n - 54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3\n - 54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3\n - metadata.jsonl\n```\n\nand the pahts in the jsonl should be relative to the metadata.json:\n```bash\n{\"file_name\": \"54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3\", \"transcript\": \"Ata tudoù penaos e tro ar bed ?\"}\n{\"file_name\": \"54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3\", \"transcript\": \"Ur gwir blijadur eo adkavout ac'hanoc'h hiziv.\"}\n...\n```\n\nSo I think this can be closed.", "Thank you for your quick answer !\nI'm sorry I missed that in the documentation.\nEverything works fine again after following your recommendations.\nI'm closing the issue." ]
2025-11-08T16:27:58
2025-11-09T12:13:38
2025-11-09T12:13:38
NONE
null
null
null
null
### Describe the bug My local dataset is not properly loaded when using `load_dataset("audiofolder", data_dir="my_dataset")` with a `jsonl` metadata file. Only the `audio` column is read while the `transcript` column is not. The last tested `datasets` version where the behavior was still correct is 2.18.0. ### Steps to reproduce the bug Dataset directory structure: ``` my_dataset/ - data/ - test/ - 54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3 - 54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3 - ... - metadata.jsonl ``` `metadata.jsonl` file content: ``` {"file_name": "data/test/54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3", "transcript": "Ata tudoù penaos e tro ar bed ?"} {"file_name": "data/test/54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3", "transcript": "Ur gwir blijadur eo adkavout ac'hanoc'h hiziv."} ... ``` ```python3 my_dataset = load_dataset("audiofolder", data_dir="my_dataset") print(my_dataset) ''' DatasetDict({ test: Dataset({ features: ['audio'], num_rows: 347 }) }) ''' print(my_dataset['test'][0]) ''' {'audio': <datasets.features._torchcodec.AudioDecoder object at 0x75ffcd172510>} ''' ``` ### Expected behavior Being able to access the `transcript` column in the loaded dataset. ### Environment info - `datasets` version: 4.4.1 - Platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.39 - Python version: 3.13.9 - `huggingface_hub` version: 1.1.2 - PyArrow version: 22.0.0 - Pandas version: 2.3.3 - `fsspec` version: 2025.10.0 Note: same issue with `datasets` v3.6.0
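A quick check (sketch) after restructuring the folder as suggested in the comments above; the directory name and column names follow this issue's example:

```python
from datasets import load_dataset

ds = load_dataset("audiofolder", data_dir="my_dataset")
print(ds["test"].column_names)  # expected: ['audio', 'transcript']
```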
{ "avatar_url": "https://avatars.githubusercontent.com/u/10166907?v=4", "events_url": "https://api.github.com/users/gweltou/events{/privacy}", "followers_url": "https://api.github.com/users/gweltou/followers", "following_url": "https://api.github.com/users/gweltou/following{/other_user}", "gists_url": "https://api.github.com/users/gweltou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gweltou", "id": 10166907, "login": "gweltou", "node_id": "MDQ6VXNlcjEwMTY2OTA3", "organizations_url": "https://api.github.com/users/gweltou/orgs", "received_events_url": "https://api.github.com/users/gweltou/received_events", "repos_url": "https://api.github.com/users/gweltou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gweltou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gweltou/subscriptions", "type": "User", "url": "https://api.github.com/users/gweltou", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7856/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7856/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
19:45:40
https://api.github.com/repos/huggingface/datasets/issues/7852
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7852/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7852/comments
https://api.github.com/repos/huggingface/datasets/issues/7852/events
https://github.com/huggingface/datasets/issues/7852
3,595,450,602
I_kwDODunzps7WTjjq
7,852
Problems with NifTI
{ "avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4", "events_url": "https://api.github.com/users/CloseChoice/events{/privacy}", "followers_url": "https://api.github.com/users/CloseChoice/followers", "following_url": "https://api.github.com/users/CloseChoice/following{/other_user}", "gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/CloseChoice", "id": 31857876, "login": "CloseChoice", "node_id": "MDQ6VXNlcjMxODU3ODc2", "organizations_url": "https://api.github.com/users/CloseChoice/orgs", "received_events_url": "https://api.github.com/users/CloseChoice/received_events", "repos_url": "https://api.github.com/users/CloseChoice/repos", "site_admin": false, "starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions", "type": "User", "url": "https://api.github.com/users/CloseChoice", "user_view_type": "public" }
[]
closed
false
null
[]
[ "> 2. when uploading via the niftifolder feature, the resulting parquet only contains relative paths to the nifti files:\n\nwhat did you use to upload the dataset ? iirc push_to_hub() does upload the bytes as well, but to_parquet() doesn't", "> > 2. when uploading via the niftifolder feature, the resulting parquet only contains relative paths to the nifti files:\n> \n> what did you use to upload the dataset ? iirc push_to_hub() does upload the bytes as well, but to_parquet() doesn't\n\nI used `push_to_hub` but the problem is that the nifti feature does not have an `embed_storage` function" ]
2025-11-06T11:46:33
2025-11-06T16:20:38
2025-11-06T16:20:38
CONTRIBUTOR
null
null
null
null
### Describe the bug There are currently 2 problems with the new NifTI feature: 1. dealing with zipped files, this is mentioned and explained [here](https://github.com/huggingface/datasets/pull/7815#issuecomment-3496199503) 2. when uploading via the `niftifolder` feature, the resulting parquet only contains relative paths to the nifti files: ```bash table['nifti'] <pyarrow.lib.ChunkedArray object at 0x798245d37d60> [ -- is_valid: all not null -- child 0 type: binary [ null, null, null, null, null, null ] -- child 1 type: string [ "/home/tobias/programming/github/datasets/nifti_extracted/T1.nii", "/home/tobias/programming/github/datasets/nifti_extracted/T2-interleaved.nii", "/home/tobias/programming/github/datasets/nifti_extracted/T2.nii", "/home/tobias/programming/github/datasets/nifti_extracted/T2_-interleaved.nii", "/home/tobias/programming/github/datasets/nifti_extracted/T2_.nii", "/home/tobias/programming/github/datasets/nifti_extracted/fieldmap.nii" ] ] ``` instead of containing bytes. The code is copy pasted from PDF, so I wonder what is going wrong here. ### Steps to reproduce the bug see the linked comment ### Expected behavior downloading should work as smoothly as for pdf ### Environment info - `datasets` version: 4.4.2.dev0 - Platform: Linux-6.14.0-33-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - `huggingface_hub` version: 0.35.3 - PyArrow version: 21.0.0 - Pandas version: 2.3.3 - `fsspec` version: 2025.9.0
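A sketch of a quick way to check whether the file bytes were actually embedded on upload; the repository id is a placeholder and `nifti` is the column name from this issue:

```python
from datasets import load_dataset

ds = load_dataset("username/my-nifti-dataset", split="train")  # hypothetical repo
table = ds.with_format("arrow")[:1]  # raw Arrow view of the first row, no decoding
print(table.column("nifti"))         # struct of {bytes, path}; bytes should not be null
```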
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7852/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7852/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
4:34:05
https://api.github.com/repos/huggingface/datasets/issues/7842
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7842/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7842/comments
https://api.github.com/repos/huggingface/datasets/issues/7842/events
https://github.com/huggingface/datasets/issues/7842
3,582,182,995
I_kwDODunzps7Vg8ZT
7,842
Transform with columns parameter triggers on non-specified column access
{ "avatar_url": "https://avatars.githubusercontent.com/u/18426892?v=4", "events_url": "https://api.github.com/users/mr-brobot/events{/privacy}", "followers_url": "https://api.github.com/users/mr-brobot/followers", "following_url": "https://api.github.com/users/mr-brobot/following{/other_user}", "gists_url": "https://api.github.com/users/mr-brobot/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mr-brobot", "id": 18426892, "login": "mr-brobot", "node_id": "MDQ6VXNlcjE4NDI2ODky", "organizations_url": "https://api.github.com/users/mr-brobot/orgs", "received_events_url": "https://api.github.com/users/mr-brobot/received_events", "repos_url": "https://api.github.com/users/mr-brobot/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mr-brobot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mr-brobot/subscriptions", "type": "User", "url": "https://api.github.com/users/mr-brobot", "user_view_type": "public" }
[]
closed
false
null
[]
[]
2025-11-03T13:55:27
2025-11-03T14:34:13
2025-11-03T14:34:13
NONE
null
null
null
null
### Describe the bug Iterating over a [`Column`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arrow_dataset.py#L633-L692) iterates through the parent [`Dataset`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arrow_dataset.py#L695) and applies all formatting/transforms on each row, regardless of which column is being accessed. This causes an error when transforms depend on columns not present in the projection. ### Steps to reproduce the bug ### Load a dataset with multiple columns ```python ds = load_dataset("mrbrobot/isic-2024", split="train") ``` ### Define a transform that specifies an input column ```python def image_transform(batch): batch["image"] = batch["image"] # KeyError when batch doesn't contain "image" return batch # apply transform only to image column ds = ds.with_format("torch") ds = ds.with_transform(image_transform, columns=["image"], output_all_columns=True) ``` ### Iterate over non-specified column ```python # iterate over a different column, triggers the transform on each row, but batch doesn't contain "image" for t in ds["target"]: # KeyError: 'image' print(t) ``` ### Expected behavior If a user iterates over `ds["target"]` and the transform specifies `columns=["image"]`, the transform should be skipped. ### Environment info `datasets`: 4.2.0 Python: 3.12.12 Linux: Debian 11.11
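A sketch of a workaround under the current behavior: read the column through an untransformed view of the dataset so the image transform never runs.

```python
# Assumes `ds` is the dataset from the snippets above, with the transform set.
plain = ds.with_format(None)  # plain python formatting, custom transform dropped
for t in plain["target"]:
    print(t)
```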
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7842/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7842/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
0:38:46
https://api.github.com/repos/huggingface/datasets/issues/7841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7841/comments
https://api.github.com/repos/huggingface/datasets/issues/7841/events
https://github.com/huggingface/datasets/issues/7841
3,579,506,747
I_kwDODunzps7VWvA7
7,841
DOC: `mode` parameter on pdf and video features unused
{ "avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4", "events_url": "https://api.github.com/users/CloseChoice/events{/privacy}", "followers_url": "https://api.github.com/users/CloseChoice/followers", "following_url": "https://api.github.com/users/CloseChoice/following{/other_user}", "gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/CloseChoice", "id": 31857876, "login": "CloseChoice", "node_id": "MDQ6VXNlcjMxODU3ODc2", "organizations_url": "https://api.github.com/users/CloseChoice/orgs", "received_events_url": "https://api.github.com/users/CloseChoice/received_events", "repos_url": "https://api.github.com/users/CloseChoice/repos", "site_admin": false, "starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions", "type": "User", "url": "https://api.github.com/users/CloseChoice", "user_view_type": "public" }
[]
closed
false
null
[]
[ "They seem to be artefacts from a copy-paste of the Image feature ^^' we should remove them" ]
2025-11-02T12:37:47
2025-11-05T14:04:04
2025-11-05T14:04:04
CONTRIBUTOR
null
null
null
null
Following up on https://github.com/huggingface/datasets/pull/7840 I asked claude code to check for undocumented parameters for other features and it found: - mode parameter on video is documented but unused: https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py#L48-L49 - the same goes for the mode parameter on the pdf feature: https://github.com/huggingface/datasets/blob/main/src/datasets/features/pdf.py#L47-L48 I assume checking if these modes can be supported and otherwise removing them is the way to go here.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7841/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7841/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
3 days, 1:26:17
https://api.github.com/repos/huggingface/datasets/issues/7839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7839/comments
https://api.github.com/repos/huggingface/datasets/issues/7839/events
https://github.com/huggingface/datasets/issues/7839
3,579,121,843
I_kwDODunzps7VVRCz
7,839
datasets doesn't work with python 3.14
{ "avatar_url": "https://avatars.githubusercontent.com/u/4789087?v=4", "events_url": "https://api.github.com/users/zachmoshe/events{/privacy}", "followers_url": "https://api.github.com/users/zachmoshe/followers", "following_url": "https://api.github.com/users/zachmoshe/following{/other_user}", "gists_url": "https://api.github.com/users/zachmoshe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zachmoshe", "id": 4789087, "login": "zachmoshe", "node_id": "MDQ6VXNlcjQ3ODkwODc=", "organizations_url": "https://api.github.com/users/zachmoshe/orgs", "received_events_url": "https://api.github.com/users/zachmoshe/received_events", "repos_url": "https://api.github.com/users/zachmoshe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zachmoshe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zachmoshe/subscriptions", "type": "User", "url": "https://api.github.com/users/zachmoshe", "user_view_type": "public" }
[]
closed
false
null
[]
[ "Thanks for the report.\nHave you tried on main? This should work, there was recently a PR merged to address this problem, see #7817", "Works on main 👍 \nWhat's the release schedule for `datasets`? Seems like a cadence of ~2weeks so I assume a real version is due pretty soon?", "let's say we do a new release later today ? :)", "Premium service! \n😂 👑 \nJust checked 4.4.0 - works as expected!" ]
2025-11-02T09:09:06
2025-11-04T14:02:25
2025-11-04T14:02:25
NONE
null
null
null
null
### Describe the bug Seems that `dataset` doesn't work with python==3.14. The root cause seems to be something with a `deel` API that was changed. ``` TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given ``` ### Steps to reproduce the bug (on a new folder) uv init uv python pin 3.14 uv add datasets uv run python (in REPL) import datasets datasets.load_dataset("cais/mmlu", "all") # will fail on any dataset ``` >>> datasets.load_dataset("cais/mmlu", "all") Traceback (most recent call last): File "<python-input-2>", line 1, in <module> datasets.load_dataset("cais/mmlu", "all") ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^ File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/load.py", line 1397, in load_dataset builder_instance = load_dataset_builder( path=path, ...<10 lines>... **config_kwargs, ) File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/load.py", line 1185, in load_dataset_builder builder_instance._use_legacy_cache_dir_if_possible(dataset_module) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^ File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/builder.py", line 615, in _use_legacy_cache_dir_if_possible self._check_legacy_cache2(dataset_module) or self._check_legacy_cache() or None ~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^ File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/builder.py", line 487, in _check_legacy_cache2 config_id = self.config.name + "-" + Hasher.hash({"data_files": self.config.data_files}) ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/fingerprint.py", line 188, in hash return cls.hash_bytes(dumps(value)) ~~~~~^^^^^^^ File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 120, in dumps dump(obj, file) ~~~~^^^^^^^^^^^ File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 114, in dump Pickler(file, recurse=True).dump(obj) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^ File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 428, in dump StockPickler.dump(self, obj) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^ File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 498, in dump self.save(obj) ~~~~~~~~~^^^^^ File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 70, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 422, in save StockPickler.save(self, obj, save_persistent_id) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 572, in save f(self, obj) # Call unbound method with explicit self ~^^^^^^^^^^^ File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 1262, in save_module_dict StockPickler.save_dict(pickler, obj) ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^ File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 1064, in save_dict 
self._batch_setitems(obj.items(), obj) ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^ TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given ``` ### Expected behavior should work. ### Environment info datasets==v4.3.0 python==3.14
{ "avatar_url": "https://avatars.githubusercontent.com/u/4789087?v=4", "events_url": "https://api.github.com/users/zachmoshe/events{/privacy}", "followers_url": "https://api.github.com/users/zachmoshe/followers", "following_url": "https://api.github.com/users/zachmoshe/following{/other_user}", "gists_url": "https://api.github.com/users/zachmoshe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zachmoshe", "id": 4789087, "login": "zachmoshe", "node_id": "MDQ6VXNlcjQ3ODkwODc=", "organizations_url": "https://api.github.com/users/zachmoshe/orgs", "received_events_url": "https://api.github.com/users/zachmoshe/received_events", "repos_url": "https://api.github.com/users/zachmoshe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zachmoshe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zachmoshe/subscriptions", "type": "User", "url": "https://api.github.com/users/zachmoshe", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7839/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7839/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 4:53:19
https://api.github.com/repos/huggingface/datasets/issues/7837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7837/comments
https://api.github.com/repos/huggingface/datasets/issues/7837/events
https://github.com/huggingface/datasets/issues/7837
3,575,454,726
I_kwDODunzps7VHRwG
7,837
mono parameter to the Audio feature is missing
{ "avatar_url": "https://avatars.githubusercontent.com/u/1250234?v=4", "events_url": "https://api.github.com/users/ernestum/events{/privacy}", "followers_url": "https://api.github.com/users/ernestum/followers", "following_url": "https://api.github.com/users/ernestum/following{/other_user}", "gists_url": "https://api.github.com/users/ernestum/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ernestum", "id": 1250234, "login": "ernestum", "node_id": "MDQ6VXNlcjEyNTAyMzQ=", "organizations_url": "https://api.github.com/users/ernestum/orgs", "received_events_url": "https://api.github.com/users/ernestum/received_events", "repos_url": "https://api.github.com/users/ernestum/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ernestum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ernestum/subscriptions", "type": "User", "url": "https://api.github.com/users/ernestum", "user_view_type": "public" }
[]
closed
false
null
[]
[ "Hey, we removed the misleading passage in the docstring and enabled support for `num_channels` as torchcodec does", "thanks!" ]
2025-10-31T15:41:39
2025-11-03T15:59:18
2025-11-03T14:24:12
NONE
null
null
null
null
According to the docs, there is a "mono" parameter to the Audio feature, which turns any stereo into mono. In practice the signal is not touched and the mono parameter, even though documented, does not exist. https://github.com/huggingface/datasets/blob/41c05299348a499807432ab476e1cdc4143c8772/src/datasets/features/audio.py#L52C1-L54C22
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7837/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7837/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
2 days, 22:42:33
https://api.github.com/repos/huggingface/datasets/issues/7834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7834/comments
https://api.github.com/repos/huggingface/datasets/issues/7834/events
https://github.com/huggingface/datasets/issues/7834
3,558,802,959
I_kwDODunzps7UHwYP
7,834
Audio.cast_column() or Audio.decode_example() causes Colab kernel crash (std::bad_alloc)
{ "avatar_url": "https://avatars.githubusercontent.com/u/2559570?v=4", "events_url": "https://api.github.com/users/rachidio/events{/privacy}", "followers_url": "https://api.github.com/users/rachidio/followers", "following_url": "https://api.github.com/users/rachidio/following{/other_user}", "gists_url": "https://api.github.com/users/rachidio/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rachidio", "id": 2559570, "login": "rachidio", "node_id": "MDQ6VXNlcjI1NTk1NzA=", "organizations_url": "https://api.github.com/users/rachidio/orgs", "received_events_url": "https://api.github.com/users/rachidio/received_events", "repos_url": "https://api.github.com/users/rachidio/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rachidio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rachidio/subscriptions", "type": "User", "url": "https://api.github.com/users/rachidio", "user_view_type": "public" }
[]
open
false
null
[]
[ "Hi ! `datasets` v4 uses `torchcodec` for audio decoding (previous versions were using `soundfile`). What is your `torchcodec` version ? Can you try other versions of `torchcodec` and see if it works ?", "When I install `datasets` with `pip install datasets[audio]` it install this version of `torchcodec`:\n```\nName: torchcodec\nVersion: 0.8.1\n```\nCan you please point to a working version of `torchcodec`?\n\nThanks for your help", "I believe you simply need to make sure the torchcodec and torch versions work together. Here is how to fix it:\n\n```python\n!pip install -U torchcodec torch\n```", "I am also encountering this same issue when i run `print(ug_court[\"train\"][0])` to view the features of the first row of my audio data", "the problem still goes on to when i force training with seeing these features", "Thank you @lhoestq I've reinstalled the packages an the error is gone.\nMy new versions are:\n```\nName: torch\nVersion: 2.8.0\n---\nName: torchaudio\nVersion: 2.8.0\n---\nName: torchcodec\nVersion: 0.8.1\n```\n\nRegards", "mine too has worked ", "Hi,\n\nI encounter the same problem when trying to inspect the first element in the dataset. My environment is:\n```\nroot@3ac6f9f8c6c4:/workspace# pip3 list | grep torch\npytorch-lightning 2.5.6\npytorch-metric-learning 2.9.0\ntorch 2.8.0+cu126\ntorch-audiomentations 0.12.0\ntorch_pitch_shift 1.2.5\ntorchaudio 2.8.0+cu126\ntorchcodec 0.8.1\ntorchelastic 0.2.2\ntorchmetrics 1.8.2\ntorchvision 0.23.0+cu126\n```\nthe same as @rachidio 's new version that works.\n\nI am in a Docker container environment, and here is the code I am working with:\n\n<img width=\"1350\" height=\"388\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/4cf0400f-9ee7-47c7-ba57-c4ef3c1e7fd6\" />" ]
2025-10-27T22:02:00
2025-11-15T16:28:04
null
NONE
null
null
null
null
### Describe the bug When using the huggingface datasets.Audio feature to decode a local or remote (public HF dataset) audio file inside Google Colab, the notebook kernel crashes with std::bad_alloc (C++ memory allocation failure). The crash happens even with a minimal code example and a valid .wav file that can be read successfully using soundfile. Here is a sample Colab notebook to reproduce the problem. https://colab.research.google.com/drive/1nnb-GC5748Tux3xcYRussCGp2x-zM9Id?usp=sharing code sample: ``` ... audio_dataset = audio_dataset.cast_column("audio", Audio(sampling_rate=16000)) # Accessing the first element crashes the Colab kernel print(audio_dataset[0]["audio"]) ``` Error log ``` WARNING what(): std::bad_alloc terminate called after throwing an instance of 'std::bad_alloc' ``` Environment Platform: Google Colab (Python 3.12.12) datasets Version: 4.3.0 soundfile Version: 0.13.1 torchaudio Version: 2.8.0+cu126 Thanks in advance for your help with this error, which I have been getting for about two weeks; it was working before. Regards ### Steps to reproduce the bug https://colab.research.google.com/drive/1nnb-GC5748Tux3xcYRussCGp2x-zM9Id?usp=sharing ### Expected behavior Loading the audio and decoding it. It should safely return: { "path": "path/filaname.wav", "array": np.ndarray([...]), "sampling_rate": 16000 } ### Environment info Environment Platform: Google Colab (Python 3.12.12) datasets Version: 4.3.0 soundfile Version: 0.13.1 torchaudio Version: 2.8.0+cu126
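As the comments above suggest, mismatched torch / torchcodec builds are a common cause of this crash; a small sketch to check the installed versions before reinstalling:

```python
from importlib.metadata import version

# torchcodec releases are built against specific torch versions,
# so the first thing to verify is that the installed pair matches.
for pkg in ("torch", "torchaudio", "torchcodec", "datasets"):
    print(pkg, version(pkg))
```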
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7834/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7834/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7832
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7832/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7832/comments
https://api.github.com/repos/huggingface/datasets/issues/7832/events
https://github.com/huggingface/datasets/issues/7832
3,555,991,552
I_kwDODunzps7T9CAA
7,832
[DOCS][minor] TIPS paragraph not compiled in docs/stream
{ "avatar_url": "https://avatars.githubusercontent.com/u/110672812?v=4", "events_url": "https://api.github.com/users/art-test-stack/events{/privacy}", "followers_url": "https://api.github.com/users/art-test-stack/followers", "following_url": "https://api.github.com/users/art-test-stack/following{/other_user}", "gists_url": "https://api.github.com/users/art-test-stack/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/art-test-stack", "id": 110672812, "login": "art-test-stack", "node_id": "U_kgDOBpi7rA", "organizations_url": "https://api.github.com/users/art-test-stack/orgs", "received_events_url": "https://api.github.com/users/art-test-stack/received_events", "repos_url": "https://api.github.com/users/art-test-stack/repos", "site_admin": false, "starred_url": "https://api.github.com/users/art-test-stack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/art-test-stack/subscriptions", "type": "User", "url": "https://api.github.com/users/art-test-stack", "user_view_type": "public" }
[]
closed
false
null
[]
[]
2025-10-27T10:03:22
2025-10-27T10:10:54
2025-10-27T10:10:54
CONTRIBUTOR
null
null
null
null
In the client documentation, the markdown 'TIP' paragraph for paragraph in docs/stream#shuffle is not well executed — not as the other in the same page / while markdown is correctly considering it. Documentation: https://huggingface.co/docs/datasets/v4.3.0/en/stream#shuffle:~:text=%5B!TIP%5D%5BIterableDataset.shuffle()%5D(/docs/datasets/v4.3.0/en/package_reference/main_classes%23datasets.IterableDataset.shuffle)%20will%20also%20shuffle%20the%20order%20of%20the%20shards%20if%20the%20dataset%20is%20sharded%20into%20multiple%20files. Github source: https://github.com/huggingface/datasets/blob/main/docs/source/stream.mdx#:~:text=Casting%20only%20works%20if%20the%20original%20feature%20type%20and%20new%20feature%20type%20are%20compatible.%20For%20example%2C%20you%20can%20cast%20a%20column%20with%20the%20feature%20type%20Value(%27int32%27)%20to%20Value(%27bool%27)%20if%20the%20original%20column%20only%20contains%20ones%20and%20zeros.
{ "avatar_url": "https://avatars.githubusercontent.com/u/110672812?v=4", "events_url": "https://api.github.com/users/art-test-stack/events{/privacy}", "followers_url": "https://api.github.com/users/art-test-stack/followers", "following_url": "https://api.github.com/users/art-test-stack/following{/other_user}", "gists_url": "https://api.github.com/users/art-test-stack/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/art-test-stack", "id": 110672812, "login": "art-test-stack", "node_id": "U_kgDOBpi7rA", "organizations_url": "https://api.github.com/users/art-test-stack/orgs", "received_events_url": "https://api.github.com/users/art-test-stack/received_events", "repos_url": "https://api.github.com/users/art-test-stack/repos", "site_admin": false, "starred_url": "https://api.github.com/users/art-test-stack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/art-test-stack/subscriptions", "type": "User", "url": "https://api.github.com/users/art-test-stack", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7832/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7832/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
0:07:32
https://api.github.com/repos/huggingface/datasets/issues/7829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7829/comments
https://api.github.com/repos/huggingface/datasets/issues/7829/events
https://github.com/huggingface/datasets/issues/7829
3,548,584,085
I_kwDODunzps7TgxiV
7,829
Memory leak / Large memory usage with num_workers = 0 and numerous dataset within DatasetDict
{ "avatar_url": "https://avatars.githubusercontent.com/u/24591024?v=4", "events_url": "https://api.github.com/users/raphaelsty/events{/privacy}", "followers_url": "https://api.github.com/users/raphaelsty/followers", "following_url": "https://api.github.com/users/raphaelsty/following{/other_user}", "gists_url": "https://api.github.com/users/raphaelsty/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/raphaelsty", "id": 24591024, "login": "raphaelsty", "node_id": "MDQ6VXNlcjI0NTkxMDI0", "organizations_url": "https://api.github.com/users/raphaelsty/orgs", "received_events_url": "https://api.github.com/users/raphaelsty/received_events", "repos_url": "https://api.github.com/users/raphaelsty/repos", "site_admin": false, "starred_url": "https://api.github.com/users/raphaelsty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raphaelsty/subscriptions", "type": "User", "url": "https://api.github.com/users/raphaelsty", "user_view_type": "public" }
[]
open
false
null
[]
[ "Thanks for the report, this is possibly related #7722 and #7694.\n\nCould you pls provide steps to reproduce this?", "To overcome this issue right now I did simply reduce the size of the dataset and ended up running a for loop (my training has now a constant learning rate schedule). From what I understood, and I don't know if it's possible, the solution would be to tell the backend of `datasets` to leave x% of the memory free (including memory mapping). Can't release the data right now but I will and then allow to reproduce this issue. But it will involve to have some free TB of disk", "@raphaelsty thanks for coming back to this. I assume you are running in streaming mode? That should prevent these errors but it looks like more people than just you have this problem, so a clearly reproducing example (including data + code) is highly appreciated.", "This could be related to this issue: https://github.com/huggingface/datasets/issues/4883 in which we discussed how RSS and memory mapping works and depends on the OS and disk type." ]
2025-10-24T09:51:38
2025-11-06T13:31:26
null
NONE
null
null
null
null
### Describe the bug Hi team, first off, I love the datasets library! 🥰 I'm encountering a potential memory leak / increasing memory usage when training a model on a very large DatasetDict. Setup: I have a DatasetDict containing 362 distinct datasets, which sum up to ~2.8 billion rows. Training Task: I'm performing contrastive learning with SentenceTransformer and Accelerate on a single node with 4 H100, which requires me to sample from only one dataset at a time. Training Loop: At each training step, I sample ~16,000 examples from a single dataset, and then switch to a different dataset for the next step. I iterate through all 362 datasets this way. Problem: The process's memory usage continuously increases over time, eventually causing a stale status where GPUs would stop working. It seems memory from previously sampled datasets isn't being released. I've set num_workers=0 for all experiments. Chart 1: Standard DatasetDict The memory usage grows steadily until it make the training stale (RSS memory) <img width="773" height="719" alt="Image" src="https://github.com/user-attachments/assets/6606bef5-1153-4f2d-bf08-82da249d6e8d" /> Chart 2: IterableDatasetDict I also tried to use IterableDatasetDict and IterableDataset. The memory curve is "smoother," but the result is the same: it grows indefinitely and the training become stale. <img width="339" height="705" alt="Image" src="https://github.com/user-attachments/assets/ee90c1a1-6c3b-4135-9edc-90955cb1695a" /> Any feedback or guidance on how to manage this memory would be greatly appreciated! ### Steps to reproduce the bug WIP, I'll add some code that manage to reproduce this error, but not straightforward. ### Expected behavior The memory usage should remain relatively constant or plateau after a few steps. Memory used for sampling one dataset should be released before or during the sampling of the next dataset. ### Environment info Python: 3.12 Datasets: 4.3.0 SentenceTransformers: 5.1.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7829/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7829/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7821
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7821/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7821/comments
https://api.github.com/repos/huggingface/datasets/issues/7821/events
https://github.com/huggingface/datasets/issues/7821
3,520,913,195
I_kwDODunzps7R3N8r
7,821
Building a dataset with large variable size arrays results in error ArrowInvalid: Value X too large to fit in C integer type
{ "avatar_url": "https://avatars.githubusercontent.com/u/51880718?v=4", "events_url": "https://api.github.com/users/kkoutini/events{/privacy}", "followers_url": "https://api.github.com/users/kkoutini/followers", "following_url": "https://api.github.com/users/kkoutini/following{/other_user}", "gists_url": "https://api.github.com/users/kkoutini/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kkoutini", "id": 51880718, "login": "kkoutini", "node_id": "MDQ6VXNlcjUxODgwNzE4", "organizations_url": "https://api.github.com/users/kkoutini/orgs", "received_events_url": "https://api.github.com/users/kkoutini/received_events", "repos_url": "https://api.github.com/users/kkoutini/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kkoutini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kkoutini/subscriptions", "type": "User", "url": "https://api.github.com/users/kkoutini", "user_view_type": "public" }
[]
open
false
null
[]
[ "Thanks for reporting ! You can fix this by specifying the output type explicitly and use `LargeList` which uses int64 for offsets:\n\n```python\nfeatures = Features({\"audio\": LargeList(Value(\"uint16\"))})\nds = ds.map(..., features=features)\n```\n\nIt would be cool to improve `list_of_pa_arrays_to_pyarrow_listarray()` to automatically use `LargeList` if the lists are longer than the int32 limit though. Contributions are welcome if you'd like to improve it" ]
2025-10-16T08:45:17
2025-10-20T13:42:05
null
CONTRIBUTOR
null
null
null
null
### Describe the bug I used map to store raw audio waveforms of variable lengths in a column of a dataset the `map` call fails with ArrowInvalid: Value X too large to fit in C integer type. ``` Traceback (most recent call last): Traceback (most recent call last): File "...lib/python3.12/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) ^^^^^^^^^^^^^^^^^^^ File "...lib/python3.12/site-packages/datasets/utils/py_utils.py", line 678, in _write_generator_to_queue for i, result in enumerate(func(**kwargs)): ^^^^^^^^^^^^^^^^^^^^^^^^^ File "...lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3526, in _map_single writer.write_batch(batch) File "...lib/python3.12/site-packages/datasets/arrow_writer.py", line 605, in write_batch arrays.append(pa.array(typed_sequence)) ^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/array.pxi", line 252, in pyarrow.lib.array File "pyarrow/array.pxi", line 114, in pyarrow.lib._handle_arrow_array_protocol File "...lib/python3.12/site-packages/datasets/arrow_writer.py", line 225, in __arrow_array__ out = list_of_np_array_to_pyarrow_listarray(data) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "...lib/python3.12/site-packages/datasets/features/features.py", line 1538, in list_of_np_array_to_pyarrow_listarray return list_of_pa_arrays_to_pyarrow_listarray( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "...lib/python3.12/site-packages/datasets/features/features.py", line 1530, in list_of_pa_arrays_to_pyarrow_listarray offsets = pa.array(offsets, type=pa.int32()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/array.pxi", line 362, in pyarrow.lib.array File "pyarrow/array.pxi", line 87, in pyarrow.lib._ndarray_to_array File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Value 2148479376 too large to fit in C integer type ``` ### Steps to reproduce the bug Calling map on a dataset that returns a column with long 1d numpy arrays of variable length. Example: ```python # %% import logging import datasets import pandas as pd import numpy as np # %% def process_batch(batch, rank): res = [] for _ in batch["id"]: res.append(np.zeros((2**30)).astype(np.uint16)) return {"audio": res} if __name__ == "__main__": df = pd.DataFrame( { "id": list(range(400)), } ) ds = datasets.Dataset.from_pandas(df) try: from multiprocess import set_start_method set_start_method("spawn") except RuntimeError: print("Spawn method already set, continuing...") mapped_ds = ds.map( process_batch, batched=True, batch_size=2, with_rank=True, num_proc=2, cache_file_name="path_to_cache/tmp.arrow", writer_batch_size=200, remove_columns=ds.column_names, # disable_nullable=True, ) ``` ### Expected behavior I think the offsets should be pa.int64() if needed and not forced to be `pa.int32()` in https://github.com/huggingface/datasets/blob/3e13d30823f8ec498d56adbc18c6880a5463b313/src/datasets/features/features.py#L1535 ### Environment info - `datasets` version: 3.3.1 - Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35 - Python version: 3.12.9 - `huggingface_hub` version: 0.29.0 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7821/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7821/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7819
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7819/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7819/comments
https://api.github.com/repos/huggingface/datasets/issues/7819/events
https://github.com/huggingface/datasets/issues/7819
3,517,086,110
I_kwDODunzps7Ronme
7,819
Cannot download opus dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/51946663?v=4", "events_url": "https://api.github.com/users/liamsun2019/events{/privacy}", "followers_url": "https://api.github.com/users/liamsun2019/followers", "following_url": "https://api.github.com/users/liamsun2019/following{/other_user}", "gists_url": "https://api.github.com/users/liamsun2019/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/liamsun2019", "id": 51946663, "login": "liamsun2019", "node_id": "MDQ6VXNlcjUxOTQ2NjYz", "organizations_url": "https://api.github.com/users/liamsun2019/orgs", "received_events_url": "https://api.github.com/users/liamsun2019/received_events", "repos_url": "https://api.github.com/users/liamsun2019/repos", "site_admin": false, "starred_url": "https://api.github.com/users/liamsun2019/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liamsun2019/subscriptions", "type": "User", "url": "https://api.github.com/users/liamsun2019", "user_view_type": "public" }
[]
open
false
null
[]
[ "Hi ! it seems \"en-zh\" doesn't exist for this dataset\n\nYou can see the list of subsets here: https://huggingface.co/datasets/Helsinki-NLP/opus_books" ]
2025-10-15T09:06:19
2025-10-20T13:45:16
null
NONE
null
null
null
null
When I tried to download opus_books using: from datasets import load_dataset dataset = load_dataset("Helsinki-NLP/opus_books") I got the following errors: FileNotFoundError: Couldn't find any data file at /workspace/Helsinki-NLP/opus_books. Couldn't find 'Helsinki-NLP/opus_books' on the Hugging Face Hub either: LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on. I also tried: dataset = load_dataset("opus_books", "en-zh") and the errors remain the same. However, I can download "mlabonne/FineTome-100k" successfully. My datasets is version 4.2.0 Any clues? Big thanks.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7819/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7819/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7818
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7818/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7818/comments
https://api.github.com/repos/huggingface/datasets/issues/7818/events
https://github.com/huggingface/datasets/issues/7818
3,515,887,618
I_kwDODunzps7RkDAC
7,818
train_test_split and stratify breaks with Numpy 2.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/24845694?v=4", "events_url": "https://api.github.com/users/davebulaval/events{/privacy}", "followers_url": "https://api.github.com/users/davebulaval/followers", "following_url": "https://api.github.com/users/davebulaval/following{/other_user}", "gists_url": "https://api.github.com/users/davebulaval/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davebulaval", "id": 24845694, "login": "davebulaval", "node_id": "MDQ6VXNlcjI0ODQ1Njk0", "organizations_url": "https://api.github.com/users/davebulaval/orgs", "received_events_url": "https://api.github.com/users/davebulaval/received_events", "repos_url": "https://api.github.com/users/davebulaval/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davebulaval/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davebulaval/subscriptions", "type": "User", "url": "https://api.github.com/users/davebulaval", "user_view_type": "public" }
[]
closed
false
null
[]
[ "I can't reproduce this. Could you pls provide an example with a public dataset/artificial dataset and show how you loaded that?\n\nThis works for me:\n\n```python\nimport numpy as np\nfrom datasets import Dataset, Features, ClassLabel, Value\n\ndata = {\"text\": [f\"sample_{i}\" for i in range(100)], \"label\": [i % 3 for i in range(100)]}\nfeatures = Features({\"text\": Value(\"string\"),\n \"label\": ClassLabel(names=[\"class_0\", \"class_1\", \"class_2\"])})\ndataset = Dataset.from_dict(data, features=features)\nsplits = dataset.train_test_split(test_size=0.2, stratify_by_column=\"label\")\nprint(f\"Success with numpy {np.__version__}\")\n```\nbut it also works for `numpy<2`", "@davebulaval tried with numpy 2.3.4, and maybe i have successfully reproduced the bug!\n```\nValueError: Unable to avoid copy while creating an array as requested.\nIf using `np.array(obj, copy=False)` replace it with `np.asarray(obj)` to allow a copy when needed (no behavior change in NumPy 1.x).\nFor more details, see https://numpy.org/devdocs/numpy_2_0_migration_guide.html#adapting-to-changes-in-the-copy-keyword.\n```\n\nAlso i downgraded to numpy 1.26.4\n```\n(hf-reproduce) F:\\Python\\Machine learning\\reproducing>python repro.py\nDatasetDict({\n train: Dataset({\n features: ['text', 'label'],\n num_rows: 16\n })\n test: Dataset({\n features: ['text', 'label'],\n num_rows: 4\n })\n})\n```", "Also @CloseChoice The bug only appears in cases where the Arrow array cannot be represented as a contiguous NumPy array without copying.\n\nSo closing the discussion loop here - \n\nThe error occurs because `train_test_split(..., stratify_by_column=...)` attempts to convert\nan Arrow column to a NumPy array using `np.array(..., copy=False)`.\n\nIn NumPy <2.0 this silently allowed a copy if needed.\nIn NumPy ≥2.0 this raises:\nValueError: Unable to avoid copy while creating an array as requested.\n\nThis only happens when the Arrow column is not contiguous in memory, which explains\nwhy some datasets reproduce it and others do not." ]
2025-10-15T00:01:19
2025-10-28T16:10:44
2025-10-28T16:10:44
NONE
null
null
null
null
### Describe the bug As stated in the title, since Numpy changed in version >2.0 with copy, the stratify parameters break. e.g. `all_dataset.train_test_split(test_size=0.2,stratify_by_column="label")` returns a Numpy error. It works if you downgrade Numpy to a version lower than 2.0. ### Steps to reproduce the bug 1. Numpy > 2.0 2. `all_dataset.train_test_split(test_size=0.2,stratify_by_column="label")` ### Expected behavior It returns a stratified split as per the results of Numpy < 2.0 ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.35 - Python version: 3.13.7 - Huggingface_hub version: 0.34.4 - PyArrow version: 19.0.0 - Pandas version: 2.3.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7818/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7818/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
13 days, 16:09:25
https://api.github.com/repos/huggingface/datasets/issues/7816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7816/comments
https://api.github.com/repos/huggingface/datasets/issues/7816/events
https://github.com/huggingface/datasets/issues/7816
3,512,210,206
I_kwDODunzps7RWBMe
7,816
disable_progress_bar() not working as expected
{ "avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4", "events_url": "https://api.github.com/users/windmaple/events{/privacy}", "followers_url": "https://api.github.com/users/windmaple/followers", "following_url": "https://api.github.com/users/windmaple/following{/other_user}", "gists_url": "https://api.github.com/users/windmaple/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/windmaple", "id": 5577741, "login": "windmaple", "node_id": "MDQ6VXNlcjU1Nzc3NDE=", "organizations_url": "https://api.github.com/users/windmaple/orgs", "received_events_url": "https://api.github.com/users/windmaple/received_events", "repos_url": "https://api.github.com/users/windmaple/repos", "site_admin": false, "starred_url": "https://api.github.com/users/windmaple/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/windmaple/subscriptions", "type": "User", "url": "https://api.github.com/users/windmaple", "user_view_type": "public" }
[]
closed
false
null
[]
[ "@xianbaoqian ", "Closing this one since it's a Xet issue." ]
2025-10-14T03:25:39
2025-10-14T23:49:26
2025-10-14T23:49:26
NONE
null
null
null
null
### Describe the bug Hi, I'm trying to load a dataset on Kaggle TPU image. There is some known compat issue with progress bar on Kaggle, so I'm trying to disable the progress bar globally. This does not work as you can see in [here](https://www.kaggle.com/code/windmaple/hf-datasets-issue). In contract, disabling progress bar for snapshot_download() works as expected as in [here](https://www.kaggle.com/code/windmaple/snapshot-download-error). ### Steps to reproduce the bug See this [notebook](https://www.kaggle.com/code/windmaple/hf-datasets-issue). There is sth. wrong with `shell_paraent`. ### Expected behavior The downloader should disable progress bar and move forward w/ no error. ### Environment info The latest version as I did: !pip install -U datasets ipywidgets ipykernel
{ "avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4", "events_url": "https://api.github.com/users/windmaple/events{/privacy}", "followers_url": "https://api.github.com/users/windmaple/followers", "following_url": "https://api.github.com/users/windmaple/following{/other_user}", "gists_url": "https://api.github.com/users/windmaple/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/windmaple", "id": 5577741, "login": "windmaple", "node_id": "MDQ6VXNlcjU1Nzc3NDE=", "organizations_url": "https://api.github.com/users/windmaple/orgs", "received_events_url": "https://api.github.com/users/windmaple/received_events", "repos_url": "https://api.github.com/users/windmaple/repos", "site_admin": false, "starred_url": "https://api.github.com/users/windmaple/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/windmaple/subscriptions", "type": "User", "url": "https://api.github.com/users/windmaple", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7816/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7816/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
20:23:47
https://api.github.com/repos/huggingface/datasets/issues/7813
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7813/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7813/comments
https://api.github.com/repos/huggingface/datasets/issues/7813/events
https://github.com/huggingface/datasets/issues/7813
3,503,446,288
I_kwDODunzps7Q0lkQ
7,813
Caching does not work when using python3.14
{ "avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4", "events_url": "https://api.github.com/users/intexcor/events{/privacy}", "followers_url": "https://api.github.com/users/intexcor/followers", "following_url": "https://api.github.com/users/intexcor/following{/other_user}", "gists_url": "https://api.github.com/users/intexcor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/intexcor", "id": 142020129, "login": "intexcor", "node_id": "U_kgDOCHcOIQ", "organizations_url": "https://api.github.com/users/intexcor/orgs", "received_events_url": "https://api.github.com/users/intexcor/received_events", "repos_url": "https://api.github.com/users/intexcor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/intexcor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/intexcor/subscriptions", "type": "User", "url": "https://api.github.com/users/intexcor", "user_view_type": "public" }
[]
closed
false
null
[]
[ "https://github.com/uqfoundation/dill/issues/725", "@intexcor does #7817 fix your problem?" ]
2025-10-10T15:36:46
2025-10-27T17:08:26
2025-10-27T17:08:26
NONE
null
null
null
null
### Describe the bug Traceback (most recent call last): File "/workspace/ctn.py", line 8, in <module> ds = load_dataset(f"naver-clova-ix/synthdog-{lang}") # или "synthdog-zh" для китайского File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1397, in load_dataset builder_instance = load_dataset_builder( path=path, ...<10 lines>... **config_kwargs, ) File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1185, in load_dataset_builder builder_instance._use_legacy_cache_dir_if_possible(dataset_module) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^ File "/workspace/.venv/lib/python3.14/site-packages/datasets/builder.py", line 612, in _use_legacy_cache_dir_if_possible self._check_legacy_cache2(dataset_module) or self._check_legacy_cache() or None ~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^ File "/workspace/.venv/lib/python3.14/site-packages/datasets/builder.py", line 485, in _check_legacy_cache2 config_id = self.config.name + "-" + Hasher.hash({"data_files": self.config.data_files}) ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/.venv/lib/python3.14/site-packages/datasets/fingerprint.py", line 188, in hash return cls.hash_bytes(dumps(value)) ~~~~~^^^^^^^ File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 120, in dumps dump(obj, file) ~~~~^^^^^^^^^^^ File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 114, in dump Pickler(file, recurse=True).dump(obj) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^ File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 428, in dump StockPickler.dump(self, obj) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^ File "/usr/lib/python3.14/pickle.py", line 498, in dump self.save(obj) ~~~~~~~~~^^^^^ File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 70, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 422, in save StockPickler.save(self, obj, save_persistent_id) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.14/pickle.py", line 572, in save f(self, obj) # Call unbound method with explicit self ~^^^^^^^^^^^ File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 1262, in save_module_dict StockPickler.save_dict(pickler, obj) ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^ File "/usr/lib/python3.14/pickle.py", line 1064, in save_dict self._batch_setitems(obj.items(), obj) ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^ TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given ### Steps to reproduce the bug ds_train = ds["train"].map(lambda x: {**x, "lang": lang}) ### Expected behavior Fixed bugs ### Environment info - `datasets` version: 4.2.0 - Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.39 - Python version: 3.14.0 - `huggingface_hub` version: 0.35.3 - PyArrow version: 21.0.0 - Pandas version: 2.3.3 - `fsspec` version: 2025.9.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7813/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7813/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
17 days, 1:31:40
https://api.github.com/repos/huggingface/datasets/issues/7811
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7811/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7811/comments
https://api.github.com/repos/huggingface/datasets/issues/7811/events
https://github.com/huggingface/datasets/issues/7811
3,500,741,658
I_kwDODunzps7QqRQa
7,811
SIGSEGV when Python exits due to near null deref
{ "avatar_url": "https://avatars.githubusercontent.com/u/5192353?v=4", "events_url": "https://api.github.com/users/iankronquist/events{/privacy}", "followers_url": "https://api.github.com/users/iankronquist/followers", "following_url": "https://api.github.com/users/iankronquist/following{/other_user}", "gists_url": "https://api.github.com/users/iankronquist/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iankronquist", "id": 5192353, "login": "iankronquist", "node_id": "MDQ6VXNlcjUxOTIzNTM=", "organizations_url": "https://api.github.com/users/iankronquist/orgs", "received_events_url": "https://api.github.com/users/iankronquist/received_events", "repos_url": "https://api.github.com/users/iankronquist/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iankronquist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iankronquist/subscriptions", "type": "User", "url": "https://api.github.com/users/iankronquist", "user_view_type": "public" }
[]
open
false
null
[]
[ "The issue seems to come from `dill` which is a `datasets` dependency, e.g. this segfaults:\n\n```python\nimport dill\nfrom tqdm import tqdm\nprogress_bar = tqdm(total=(1000), unit='cols', desc='cols ')\nprogress_bar.update(1)\n```\n\n`tqdm` seems to segfault when `dill` is imported. I only found this about segfault but it's maybe not related https://github.com/tqdm/tqdm/issues/1678 ?", "After more investigation it seems to be because of it imports `__main__`. This segfaults:\n\n```python\nimport __main__\nfrom tqdm import tqdm\nprogress_bar = tqdm(total=(1000), unit='cols', desc='cols ')\nprogress_bar.update(1)\n```\n\nI opened an issue at https://github.com/tqdm/tqdm/issues/1687", "Here is a workaround. You can run your code as long as the progress bar is closed before exiting.\n\n```python\nfrom datasets import load_dataset\nfrom tqdm import tqdm\n\nprogress_bar = tqdm(total=(1000), unit='cols', desc='cols ')\nprogress_bar.update(1)\nprogress_bar.close() # avoids the segfault\n```", "https://github.com/tqdm/tqdm/issues/1687#issuecomment-3392457094" ]
2025-10-09T22:00:11
2025-10-10T22:09:24
null
NONE
null
null
null
null
### Describe the bug When I run the following python script using datasets I get a segfault. ```python from datasets import load_dataset from tqdm import tqdm progress_bar = tqdm(total=(1000), unit='cols', desc='cols ') progress_bar.update(1) ``` ``` % lldb -- python3 crashmin.py (lldb) target create "python3" Current executable set to '/Users/ian/bug/venv/bin/python3' (arm64). (lldb) settings set -- target.run-args "crashmin.py" (lldb) r Process 8095 launched: '/Users/ian/bug/venv/bin/python3' (arm64) Process 8095 stopped * thread #2, stop reason = exec frame #0: 0x0000000100014b30 dyld`_dyld_start dyld`_dyld_start: -> 0x100014b30 <+0>: mov x0, sp 0x100014b34 <+4>: and sp, x0, #0xfffffffffffffff0 0x100014b38 <+8>: mov x29, #0x0 ; =0 Target 0: (Python) stopped. (lldb) c Process 8095 resuming cols : 0% 0/1000 [00:00<?, ?cols/s]Process 8095 stopped * thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x10) frame #0: 0x0000000101783454 _datetime.cpython-313-darwin.so`delta_new + 188 _datetime.cpython-313-darwin.so`delta_new: -> 0x101783454 <+188>: ldr x3, [x20, #0x10] 0x101783458 <+192>: adrp x0, 10 0x10178345c <+196>: add x0, x0, #0x6fc ; "seconds" Target 0: (Python) stopped. (lldb) bt * thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x10) * frame #0: 0x0000000101783454 _datetime.cpython-313-darwin.so`delta_new + 188 frame #1: 0x0000000100704b60 Python`type_call + 96 frame #2: 0x000000010067ba34 Python`_PyObject_MakeTpCall + 120 frame #3: 0x00000001007aae3c Python`_PyEval_EvalFrameDefault + 30236 frame #4: 0x000000010067c900 Python`PyObject_CallOneArg + 112 frame #5: 0x000000010070f0a0 Python`slot_tp_finalize + 116 frame #6: 0x000000010070c3b4 Python`subtype_dealloc + 788 frame #7: 0x00000001006c378c Python`insertdict + 756 frame #8: 0x00000001006db2b0 Python`_PyModule_ClearDict + 660 frame #9: 0x000000010080a9a8 Python`finalize_modules + 1772 frame #10: 0x0000000100809a44 Python`_Py_Finalize + 264 frame #11: 0x0000000100837630 Python`Py_RunMain + 252 frame #12: 0x0000000100837ef8 Python`pymain_main + 304 frame #13: 0x0000000100837f98 Python`Py_BytesMain + 40 frame #14: 0x000000019cfcc274 dyld`start + 2840 (lldb) register read x20 x20 = 0x0000000000000000 (lldb) ``` ### Steps to reproduce the bug Run the script above, and observe the segfault. ### Expected behavior No segfault ### Environment info ``` % pip freeze datasets | grep -i datasets datasets==4.2.0 (venv) 0 ~/bug 14:58:06 % pip freeze tqdm | grep -i tqdm tqdm==4.67.1 (venv) 0 ~/bug 14:58:16 % python --version Python 3.13.7 ```
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7811/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7811/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7804/comments
https://api.github.com/repos/huggingface/datasets/issues/7804/events
https://github.com/huggingface/datasets/issues/7804
3,498,534,596
I_kwDODunzps7Qh2bE
7,804
Support scientific data formats
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
open
false
null
[]
[ "Please add the support for `Zarr`! That's what we use in the Bioimaging community. It is crucial, because raw upload of a *single* bio image can take _terrabytes in memory_!\n\nThe python library would be `bioio` or `zarr`:\n- [ ] Zarr: `bioio` or `zarr`\n\nSee a [Zarr example](https://ome.github.io/ome-ngff-validator/?source=https://uk1s3.embassy.ebi.ac.uk/bia-integrator-data/S-BIAD845/796b9fb8-f4ec-4c4b-bfc3-5cb00ccf19fe/796b9fb8-f4ec-4c4b-bfc3-5cb00ccf19fe.zarr)\n\ncc @joshmoore", "@stefanches7 `zarr` is already usable with the hf hub as an array store. See this example from the [docs](https://huggingface.co/docs/huggingface_hub/en/guides/hf_file_system):\n\n```python\nimport numpy as np\nimport zarr\n\nembeddings = np.random.randn(50000, 1000).astype(\"float32\")\n\n# Write an array to a repo\nwith zarr.open_group(\"hf://my-username/my-model-repo/array-store\", mode=\"w\") as root:\n foo = root.create_group(\"embeddings\")\n foobar = foo.zeros('experiment_0', shape=(50000, 1000), chunks=(10000, 1000), dtype='f4')\n foobar[:] = embeddings\n\n# Read an array from a repo\nwith zarr.open_group(\"hf://my-username/my-model-repo/array-store\", mode=\"r\") as root:\n first_row = root[\"embeddings/experiment_0\"][0]\n```\n\nIs there additional functionality that would not be covered by this?", "@cakiki I think some tiling capabilities, as well as metadata / labels handling. Consult ome-zarr doc here: https://ome-zarr.readthedocs.io/en/stable/python.html\nVisualization would be the cherry on the top. \n\ncc @joshmoore @lubianat @St3V0Bay: curious what you think", "zarr-specific dataset viewer would be very cool", "A support for BIDS it would be perfect, I think it's possible to do all the biosinal can be done with mne. There's a cool community for decoding brain signals, and now with EMG. The new META bracelet EMG is saving things in BIDS.\n\nI can help to interface, coding and try to make this happen. I am available at hugging face discord with the username aristimunha, if some 1-to-1 discuss it would be necessary :)", "@lhoestq , @cakiki , do you think we can make this happen?", "If you give me the OK, I'll create the PR to make everything for a Biosignal Reader logic, I already studied the nilabel PR :)", "That would be an amazing addition ! Feel free to ping me in your PR for review or if you have questions / if I can help", "@bruAristimunha @lhoestq I've recalled a gold of a resource for BIDS: https://openneuro.org/\n\nDo you think there is a data-easy way to make those visible here on HuggingFace? Afaik they use `datalad` to fetch the data. Maybe the best way is to leave OpenNeuro as-is, not connecting it to HuggingFace at all - just an idea I had spontaneously.", "I know an \"easy\" way to create interoperability with all biosignal datasets from OpenNeuro =) \n\nFor biosignal data, we can use [EEGDash](https://eegdash.org/) to create a Pytorch dataset, which automates fetch, lazy read, and converts to a pytorch dataset. 
\n\nI have a question about the best serialization for a Hugging Face dataset, but I can discuss it with some of you on Discord; my username is aristimunha.", "I can explain it publicly too, but I think a short 5-minute conversation would be better than many, many texts to explain the details.", "It's ok to have discussions in one place here (or in a separate issue if it's needed) - I also generally check github more often than discord ^^'", "Hi @bruAristimunha @lhoestq any way we could proceed on this?\nI see someone posted a Nifti vizualization PR: https://github.com/huggingface/datasets/pull/7874 - I think it would be a shame if we couldn't accompany that by a neat way to import BIDS Nifti!", "@stefanches7 author of #7874 here, would be open to expand the current support to BIDS as well after having a brief look. \nMaybe having a brief call over Discord (my username: TobiasPitters on the huggingface discord server) might help sorting things out, since I am not familiar with BIDS. So getting an understanding over test cases needed, etc. would be great!", "Hey!!\n\nFrom a bids perspective, I can provide full support for all biosignal types (EEG, iEEG, MEG, EMG). BIDS is a well-established contract format; I believe we can design something that supports the entire medical domain. I think it just requires a few details to be aligned.\n\nFrom my perspective, the tricky part is how to best adapt and serialize from the Hugging Face perspective.\n\nUnder the hood, for the biosignal part, I think I would use [mne](https://mne.tools/) for interoperability and [eegdash](https://eegdash.org/dataset_summary.html) to create the serialized dataset, but we can definitely discuss this further. I will ping you @CloseChoice on Discord.", "had a discussion with @neurolabusc and here's a quick wrap-up:\n - BIDS support would be huge (@bruAristimunha would be great if we could catch up on that)\n - DICOM support as well, but that might be harder due to a lot of variety in how headers are handled, vendor specifics etc. So to have a reliable pipeline to interact with whole folders of DICOM files (including metadata) would require a lot of work and a lot of testing. Therefore I set https://github.com/huggingface/datasets/pull/7835 back to draft mode. But there are tools that ease the way, especially https://github.com/ImagingDataCommons/highdicom (or potentially https://github.com/QIICR/dcmqi). \n - Getting users would help in order to understand what other formats/features are required therefore loading a bunch of open datasets to the hub using the new Nifti feature would be great. Some tutorials might help here as well.", "Hi @CloseChoice and @bruAristimunha, glad to meet you both! We could appoint a call; I am currently moving to a new job, so the time slots are limited, but let's connect over Discord and see what we could do.\n\n* BIDS: our hackathon team @zuazo @ekarrieta @lakshya16157 put up a BIDS format converter: https://huggingface.co/spaces/stefanches/OpenBIDSifier. Might be useful for imaging dataset conversion to BIDS.\n* DICOM support: cc @St3V0Bay, the author of DICOM support in CroissantML (https://github.com/mlcommons/croissant/pull/942)\n\ncc @nolden", "my username is aristimunha within the huggieng face discord to discuss more" ]
2025-10-09T10:18:24
2025-11-26T16:09:43
null
MEMBER
null
null
null
null
List of formats and libraries we can use to load the data in `datasets`: - [ ] DICOMs: pydicom - [x] NIfTIs: nibabel - [ ] WFDB: wfdb cc @zaRizk7 for viz Feel free to comment / suggest other formats and libs you'd like to see or to share your interest in one of the mentioned format
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 5, "hooray": 4, "laugh": 0, "rocket": 0, "total_count": 10, "url": "https://api.github.com/repos/huggingface/datasets/issues/7804/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7804/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7802
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7802/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7802/comments
https://api.github.com/repos/huggingface/datasets/issues/7802/events
https://github.com/huggingface/datasets/issues/7802
3,497,454,119
I_kwDODunzps7Qduon
7,802
[Docs] Missing documentation for `Dataset.from_dict`
{ "avatar_url": "https://avatars.githubusercontent.com/u/69421545?v=4", "events_url": "https://api.github.com/users/aaronshenhao/events{/privacy}", "followers_url": "https://api.github.com/users/aaronshenhao/followers", "following_url": "https://api.github.com/users/aaronshenhao/following{/other_user}", "gists_url": "https://api.github.com/users/aaronshenhao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aaronshenhao", "id": 69421545, "login": "aaronshenhao", "node_id": "MDQ6VXNlcjY5NDIxNTQ1", "organizations_url": "https://api.github.com/users/aaronshenhao/orgs", "received_events_url": "https://api.github.com/users/aaronshenhao/received_events", "repos_url": "https://api.github.com/users/aaronshenhao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aaronshenhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aaronshenhao/subscriptions", "type": "User", "url": "https://api.github.com/users/aaronshenhao", "user_view_type": "public" }
[]
open
false
null
[]
[ "I'd like to work on this documentation issue.", "Hi I'd like to work on this. I can see the docstring is already in the code. \nCould you confirm:\n1. Is this still available?\n2. Should I add this to the main_classes.md file, or is there a specific \n documentation file I should update?\n3. Are there any formatting guidelines I should follow?\n\nI'm new to contributing but eager to learn!" ]
2025-10-09T02:54:41
2025-10-19T16:09:33
null
NONE
null
null
null
null
Documentation link: https://huggingface.co/docs/datasets/en/package_reference/main_classes Link to method (docstring present): https://github.com/huggingface/datasets/blob/6f2502c5a026caa89839713f6f7c8b958e5e83eb/src/datasets/arrow_dataset.py#L1029 The docstring is present for the function, but seems missing from the official documentation for the `Dataset` class on HuggingFace. The method in question: ```python @classmethod def from_dict( cls, mapping: dict, features: Optional[Features] = None, info: Optional[DatasetInfo] = None, split: Optional[NamedSplit] = None, ) -> "Dataset": """ Convert `dict` to a `pyarrow.Table` to create a [`Dataset`]. Important: a dataset created with from_dict() lives in memory and therefore doesn't have an associated cache directory. This may change in the future, but in the meantime if you want to reduce memory usage you should write it back on disk and reload using e.g. save_to_disk / load_from_disk. Args: mapping (`Mapping`): Mapping of strings to Arrays or Python lists. features ([`Features`], *optional*): Dataset features. info (`DatasetInfo`, *optional*): Dataset information, like description, citation, etc. split (`NamedSplit`, *optional*): Name of the dataset split. Returns: [`Dataset`] """ ```
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7802/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7802/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
https://api.github.com/repos/huggingface/datasets/issues/7798
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7798/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7798/comments
https://api.github.com/repos/huggingface/datasets/issues/7798/events
https://github.com/huggingface/datasets/issues/7798
3,484,470,782
I_kwDODunzps7PsM3-
7,798
Audio dataset is not decoding on 4.1.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/61390950?v=4", "events_url": "https://api.github.com/users/thewh1teagle/events{/privacy}", "followers_url": "https://api.github.com/users/thewh1teagle/followers", "following_url": "https://api.github.com/users/thewh1teagle/following{/other_user}", "gists_url": "https://api.github.com/users/thewh1teagle/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thewh1teagle", "id": 61390950, "login": "thewh1teagle", "node_id": "MDQ6VXNlcjYxMzkwOTUw", "organizations_url": "https://api.github.com/users/thewh1teagle/orgs", "received_events_url": "https://api.github.com/users/thewh1teagle/received_events", "repos_url": "https://api.github.com/users/thewh1teagle/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thewh1teagle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thewh1teagle/subscriptions", "type": "User", "url": "https://api.github.com/users/thewh1teagle", "user_view_type": "public" }
[]
open
false
null
[]
[ "Previously (datasets<=3.6.0), audio columns were decoded automatically when accessing a row. Now, for performance reasons, audio decoding is lazy by default: you just see the file path unless you explicitly cast the column to Audio.\n\nHere’s the fix (following the current [datasets audio docs](https://huggingface.co/docs/datasets/en/audio_load)\n):\n\n```\nfrom datasets import load_dataset, Audio\n\ndataset = load_dataset(\"MrDragonFox/Elise\", split=\"train\")\n\n# Explicitly decode the audio column\ndataset = dataset.cast_column(\"audio\", Audio(sampling_rate=16_000))\n\nprint(dataset[0][\"audio\"])\n# {'path': '...', 'array': array([...], dtype=float32), 'sampling_rate': 16000}\n```", "@haitam03-yo's comment is right that the data is not decoded by default anymore indeed, but here is how it works in practice now:\n\nFrom `datasets` v4, audio data are read as [AudioDecoder](https://meta-pytorch.org/torchcodec/0.4/generated/torchcodec.decoders.AudioDecoder.html) objects from torchcodec. This doesn't decode the data by default, but you can call `audio.get_all_samples()` to decode the audio.\n\nSee the documentation on how to process audio data here: https://huggingface.co/docs/datasets/audio_process", "To resolve this, you need to explicitly cast the audio column to the Audio feature. This will decode the audio data and make it accessible as an array. Here is the corrected code snippet\n\n\nfrom datasets import load_dataset, Audio\n\n# Load your dataset\ndataset = load_dataset(\"MrDragonFox/Elise\", split=\"train\")\n\n# Explicitly cast the 'audio' column to the Audio feature\ndataset = dataset.cast_column(\"audio\", Audio(sampling_rate=16_000))\n\n# Now you can access the decoded audio array\nprint(dataset[0][\"audio\"])\n\nBy adding the cast_column step, you are telling the datasets library to decode the audio data with the specified sampling rate, and you will then be able to access the audio array as you were used to in previous versions." ]
2025-10-05T06:37:50
2025-10-06T14:07:55
null
NONE
null
null
null
null
### Describe the bug The audio column remain as non-decoded objects even when accessing them. ```python dataset = load_dataset("MrDragonFox/Elise", split = "train") dataset[0] # see that it doesn't show 'array' etc... ``` Works fine with `datasets==3.6.0` Followed the docs in - https://huggingface.co/docs/datasets/en/audio_load ### Steps to reproduce the bug ```python dataset = load_dataset("MrDragonFox/Elise", split = "train") dataset[0] # see that it doesn't show 'array' etc... ``` ### Expected behavior It should decode when accessing the elemenet ### Environment info 4.1.1 ubuntu 22.04 Related - https://github.com/huggingface/datasets/issues/7707
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7798/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7798/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
null
End of preview.

Dataset Card for huggingface/datasets GitHub Issues

Dataset Details

Dataset Description

This dataset is a collection of GitHub Issue and Pull Request metadata scraped from the public huggingface/datasets repository using the GitHub REST API. It contains 3,261 distinct issue records (excluding pull requests) that were extracted from an initial fetch of 7,808 samples. The data provides a rich corpus of real-world technical communication focused on open-source software development, covering the repository's activity up to late 2025.

The corpus is primarily in English (en) and centered on the open-source software development domain of the datasets library. It was curated to offer a clean, usable source for tasks such as issue classification and summarization within a highly technical context.

Uses

This simplified dataset primarily supports tasks based on textual content and intrinsic labels (state).

text-classification: Issue State Classification. The dataset can be used to train a model for Issue State Classification, which consists in predicting whether an issue is currently 'open' or 'closed' based solely on its title and body text (a loading sketch is given at the end of this section). Success on this task is typically measured by achieving a high Accuracy or F1-score.

other:issue-summarization. The dataset can be used to train a model for Issue Summarization (Sequence-to-Sequence), which consists in generating a concise title/summary given the longer body text of the issue. Success on this task is typically measured by achieving a high ROUGE-L score.

other:issue-type-classification. The dataset can be used to train a model for Issue Type Classification, which consists in classifying whether a record represents a bug report, a feature request, or a general question (labels derived or annotated from the title and body fields).
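As a minimal sketch of the state-classification setup, assuming the records are available as a local issues.jsonl export (the file name is illustrative and not part of the release):

```python
from datasets import load_dataset

# Load the exported records; "issues.jsonl" is a hypothetical local path.
ds = load_dataset("json", data_files="issues.jsonl", split="train")

# Keep the two text inputs and the intrinsic label.
ds = ds.select_columns(["title", "body", "state"])

# Concatenate title and body into a single input string.
ds = ds.map(
    lambda row: {"text": (row["title"] or "") + "\n\n" + (row["body"] or "")},
    remove_columns=["title", "body"],
)

# Encode the 'open'/'closed' strings as integer class labels.
ds = ds.class_encode_column("state")
print(ds.features["state"], ds[0]["text"][:120])
```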

Languages

The primary language represented in the dataset is English (en) (BCP-47 code: en), consistent with standard practice for global open-source projects. The text is technical, informal, and conversational, reflecting developer communication on a platform like GitHub.

Dataset Structure

{ "url": "String", "repository_url": "String", "labels_url": "String", "comments_url": "String", "events_url": "String", "html_url": "String", "id": "Integer", "node_id": "String", "number": "Integer", "title": "String", "user": { "login": "String", "id": "Integer", "node_id": "String", "avatar_url": "String", // ... (other user fields) "site_admin": "Boolean" }, "labels": [ // List of Structs { "name": "String", "color": "String", "id": "Integer" // ... (other label fields) } ], "state": "String", "locked": "Boolean", "assignee": "Null/Struct", "assignees": [ // List of Structs (User objects) ], "milestone": "Null/Struct", "comments": "Integer", "created_at": "Timestamp", "updated_at": "Timestamp", "closed_at": "Timestamp/Null", "author_association": "String", "type": "String", "active_lock_reason": "Null/String", "draft": "Boolean", "pull_request": { "url": "String", "html_url": "String", "diff_url": "String", "patch_url": "String", "merged_at": "Timestamp/Null" }, "body": "String", "closed_by": "Null/Struct", "reactions": { "url": "String", "total_count": "Integer" // ... (other reaction counts) }, "timeline_url": "String", "performed_via_github_app": "Null", "state_reason": "Null/String" }

Data Fields

All original fields are retained except for milestone. This includes complex nested structures and all timestamps.

id: int64. The unique GitHub identifier.

number: int64. The sequential issue/PR number.

title: string. The title of the issue/PR (Primary input).

state: string. The current status (open or closed).

comments: list[string]. The texts of the comments fetched for each issue (see Augmentation below).

created_at: timestamp. The creation time.

updated_at: timestamp. The last update time.

closed_at: timestamp. The closure time (null if open).

user: struct. Metadata about the author.

labels: list[struct]. A list of labels applied.

pull_request: struct. Metadata specific to a Pull Request (includes merged_at timestamp).

body: string. The main text description (Primary input; a short usage sketch follows after this list).

assignees: list[struct]. A list of assigned users.

reactions: struct. Reaction counts (e.g., +1, heart).

milestone: (DROPPED) This is the only field removed in normalization.
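Because title and body are the primary text fields, the summarization task from the Uses section maps directly onto them. A small sketch under the same assumptions (hypothetical local path; the empty-body filter is an illustrative choice):

```python
from datasets import load_dataset

# Hypothetical local export of the records.
ds = load_dataset("json", data_files="issues.jsonl", split="train")

# Frame summarization as body -> title, dropping rows with an empty body.
pairs = ds.select_columns(["title", "body"]).filter(lambda row: bool(row["body"]))
pairs = pairs.rename_columns({"body": "document", "title": "summary"})
print(pairs[0]["summary"])
```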

Data Splits

| Split Name | Sample Type | Number of Examples |
| --- | --- | --- |
| initial_fetch | Issues & PRs | 7,808 |
| train | Pure Issues | 3,261 |

Criteria for Splitting:

The data was collected as a single stream and is presented as one split (train).
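Because only a single train split is published, any held-out evaluation data has to be derived by the user. A minimal sketch (the split ratio and seed are arbitrary choices, and the local path is hypothetical):

```python
from datasets import load_dataset

ds = load_dataset("json", data_files="issues.jsonl", split="train")  # hypothetical path

# Carve out a local 90/10 train/validation split.
splits = ds.train_test_split(test_size=0.1, seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)
```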

Dataset Creation

Curation Rationale

The dataset was curated to provide a high-quality corpus of pure issues for supervised learning. The primary motivation was to: 1) Separate Issues from PRs to avoid class confusion; 2) Augment Issues with Comments to provide the full conversational context necessary for realistic AI applications like automatic issue response or summarization.

Initial Data Collection and Normalization

Process: Data was collected using the GitHub REST API (/repos/huggingface/datasets/issues) with state=all, resulting in 7,808 total records (issues and PRs).

Separation: The initial records were filtered to include only records where the pull_request field was null, resulting in 3,261 issues.

Augmentation: A subsequent fetch was performed to retrieve all associated comments for each of the 3,261 issues.

Normalization: Only the milestone field was explicitly dropped prior to saving the .jsonl file. All other original fields were retained intact.
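The collection steps above can be approximated with the sketch below. It is not the curators' actual script: pagination details, authentication, rate limiting, and the output path are assumptions, and only documented GitHub REST API endpoints and fields are used.

```python
import json
import requests

API = "https://api.github.com/repos/huggingface/datasets/issues"
HEADERS = {"Accept": "application/vnd.github+json"}  # add an auth token for higher rate limits

# 1) Fetch: state=all returns open and closed issues *and* pull requests, 100 per page.
records, page = [], 1
while True:
    resp = requests.get(API, params={"state": "all", "per_page": 100, "page": page}, headers=HEADERS)
    resp.raise_for_status()
    batch = resp.json()
    if not batch:
        break
    records.extend(batch)
    page += 1

# 2) Separation: the API marks pull requests with a 'pull_request' key; keep pure issues only.
issues = [r for r in records if r.get("pull_request") is None]

# 3) Augmentation: fetch the comment texts for each issue via its comments_url.
for issue in issues:
    comment_objs = requests.get(issue["comments_url"], headers=HEADERS).json()
    issue["comments"] = [c["body"] for c in comment_objs]

# 4) Normalization: drop only the 'milestone' field and write one JSON record per line.
with open("issues.jsonl", "w", encoding="utf-8") as f:
    for issue in issues:
        issue.pop("milestone", None)
        f.write(json.dumps(issue, ensure_ascii=False) + "\n")
```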

Who are the source language producers?

The data was human-generated by developers, engineers, and community members interacting with the huggingface/datasets repository on GitHub.

Personal and Sensitive Information

Fields like user contain publicly available GitHub login names and user IDs, which are considered personal identifiers. The raw text in the body field may contain indirect personal or sensitive information posted by users.

Considerations for Using the Data

Discussion of Biases

The primary bias is Selection Bias: The data is specific to the datasets library and is heavily biased toward technical, engineering, and machine learning terminology.

Additional Information

Licensing Information

The source data is derived from a public GitHub repository under the Apache-2.0 License.
