Dataset Viewer

Each row below is one pipe-separated record with the following columns:

| Column | Type |
|---|---|
| url | string (6 classes) |
| repository_url | string (1 class) |
| labels_url | string (6 classes) |
| comments_url | string (6 classes) |
| events_url | string (6 classes) |
| html_url | string (6 classes) |
| id | int64 (2.93B–2.94B) |
| node_id | string (6 classes) |
| number | int64 (7.46k–7.47k) |
| title | string (6 classes) |
| user | dict |
| labels | list (lengths 0–1) |
| state | string (2 classes) |
| locked | bool (1 class) |
| assignee | dict |
| assignees | list (length 0) |
| milestone | dict |
| comments | sequence (lengths 0–1) |
| created_at | timestamp[s] |
| updated_at | timestamp[s] |
| closed_at | timestamp[s] |
| author_association | string (2 classes) |
| type | null |
| sub_issues_summary | dict |
| active_lock_reason | null |
| body | string (5 classes) |
| closed_by | dict |
| reactions | dict |
| timeline_url | string (6 classes) |
| performed_via_github_app | null |
| state_reason | string (1 class) |
| draft | bool (1 class) |
| pull_request | dict |
| is_pull_request | bool (2 classes) |
https://api.github.com/repos/huggingface/datasets/issues/7470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7470/comments | https://api.github.com/repos/huggingface/datasets/issues/7470/events | https://github.com/huggingface/datasets/issues/7470 | 2,937,236,323 | I_kwDODunzps6vEqtj | 7,470 | Is it possible to shard a single-sharded IterableDataset? | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-03-21T04:33:37 | 2025-03-21T04:33:37 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | I thought https://github.com/huggingface/datasets/pull/7252 might be applicable, but looking at it, maybe not.
Say we have a process, e.g. a database query, that can return data in a slightly different order each time. The initial query therefore needs to be run by a single thread (not to mention that running it multiple times incurs more cost). But the results are also big enough that we don't want to materialize them entirely, and instead stream them with an IterableDataset.
But after we have the results, we want to split them up across workers to parallelize processing.
Is something like this possible to do?
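One approach that can work, assuming the query can be made deterministic (for example with a fixed shuffle seed or a stable `ORDER BY`), is to let every worker re-run the same deterministic stream and keep a disjoint stride of it with `datasets.distributed.split_dataset_by_node`. This is only a sketch, with the caveat that the query still runs once per worker, which the setup above is trying to avoid:

```python
import random

import datasets
from datasets.distributed import split_dataset_by_node


def gen(seed=0):
    items = list(range(10))
    random.Random(seed).shuffle(items)  # fixed seed: identical order on every re-run
    yield from items


ds = datasets.IterableDataset.from_generator(gen)

num_shards = 3
for rank in range(num_shards):
    # With a single-sharded dataset, split_dataset_by_node falls back to
    # example-level striding: node `rank` keeps every example whose index
    # satisfies idx % world_size == rank, so the shards are disjoint.
    shard = split_dataset_by_node(ds, rank=rank, world_size=num_shards)
    print('shard', rank, list(shard))
```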
For contrast, here's a failed attempt. The end result should be that each shard has unique data, but unfortunately the generator gets run once in each shard and the results end up with duplicates...
```python
import random
import datasets


def gen():
    print('RUNNING GENERATOR!')
    items = list(range(10))
    random.shuffle(items)
    yield from items


ds = datasets.IterableDataset.from_generator(gen)

print('dataset contents:')
for item in ds:
    print(item)
print()

print('dataset contents (2):')
for item in ds:
    print(item)
print()

num_shards = 3


def sharded(shard_id):
    for i, example in enumerate(ds):
        if i % num_shards in shard_id:
            yield example


ds1 = datasets.IterableDataset.from_generator(
    sharded, gen_kwargs={'shard_id': list(range(num_shards))}
)
for shard in range(num_shards):
    print('shard', shard)
    for item in ds1.shard(num_shards, shard):
        print(item)
``` | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7470/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7471/comments | https://api.github.com/repos/huggingface/datasets/issues/7471/events | https://github.com/huggingface/datasets/issues/7471 | 2,937,530,069 | I_kwDODunzps6vFybV | 7,471 | Adding argument to `_get_data_files_patterns` | {
"login": "SangbumChoi",
"id": 34004152,
"node_id": "MDQ6VXNlcjM0MDA0MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SangbumChoi",
"html_url": "https://github.com/SangbumChoi",
"followers_url": "https://api.github.com/users/SangbumChoi/followers",
"following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}",
"gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions",
"organizations_url": "https://api.github.com/users/SangbumChoi/orgs",
"repos_url": "https://api.github.com/users/SangbumChoi/repos",
"events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/SangbumChoi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2025-03-21T07:17:53 | 2025-03-21T07:17:53 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | ### Feature request
How about adding an argument for the case where the user already knows the pattern?
https://github.com/huggingface/datasets/blob/a256b85cbc67aa3f0e75d32d6586afc507cf535b/src/datasets/data_files.py#L252
### Motivation
When using `load_dataset`, people might load 10M images from local files.
However, because fsspec has to search for all the matching file patterns, pattern resolution alone takes more than 10 hours (a real use case).
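For context, explicit `data_files` patterns passed to `load_dataset` already bypass the automatic (and potentially very slow) pattern discovery; this request would add the same kind of escape hatch at the `_get_data_files_patterns` level. A sketch of the existing workaround (the local path is illustrative):

```python
from datasets import load_dataset

# Passing explicit glob patterns skips the automatic pattern discovery
# over millions of local files; the path below is made up.
ds = load_dataset(
    "imagefolder",
    data_files={"train": "/data/images/train/**/*.jpg"},
)
```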
### Your contribution
Yeah I can make this happen if this seems valid. @lhoestq WDYT?
Something like:
```python
def _get_data_files_patterns(pattern_resolver: Callable[[str], list[str]], patterns: PATTERNS) -> dict[str, list[str]]:
``` | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7471/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7469 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7469/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7469/comments | https://api.github.com/repos/huggingface/datasets/issues/7469/events | https://github.com/huggingface/datasets/issues/7469 | 2,936,606,080 | I_kwDODunzps6vCQ2A | 7,469 | Custom split name with the web interface | {
"login": "vince62s",
"id": 15141326,
"node_id": "MDQ6VXNlcjE1MTQxMzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/15141326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vince62s",
"html_url": "https://github.com/vince62s",
"followers_url": "https://api.github.com/users/vince62s/followers",
"following_url": "https://api.github.com/users/vince62s/following{/other_user}",
"gists_url": "https://api.github.com/users/vince62s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vince62s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vince62s/subscriptions",
"organizations_url": "https://api.github.com/users/vince62s/orgs",
"repos_url": "https://api.github.com/users/vince62s/repos",
"events_url": "https://api.github.com/users/vince62s/events{/privacy}",
"received_events_url": "https://api.github.com/users/vince62s/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-03-20T20:45:59 | 2025-03-21T07:20:37 | 2025-03-21T07:20:37 | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | ### Describe the bug
According to the doc here: https://huggingface.co/docs/hub/datasets-file-names-and-splits#custom-split-name
it should infer the split name from the subdirectory of `data` or the beginning of the file names in `data`.
When doing this manually through the web upload, it does not work: it uses "train" as the only split.
example: https://huggingface.co/datasets/eole-nlp/estimator_chatml
### Steps to reproduce the bug
Follow the link above.
### Expected behavior
There should be two splits, "mlqe" and "1720_da".
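For reference, the convention in the linked doc maps file-name prefixes under `data/` to split names, so a layout like the following (file names illustrative) should yield the two splits:

```
data/mlqe-00000-of-00001.csv
data/1720_da-00000-of-00001.csv
```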
### Environment info
website | {
"login": "vince62s",
"id": 15141326,
"node_id": "MDQ6VXNlcjE1MTQxMzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/15141326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vince62s",
"html_url": "https://github.com/vince62s",
"followers_url": "https://api.github.com/users/vince62s/followers",
"following_url": "https://api.github.com/users/vince62s/following{/other_user}",
"gists_url": "https://api.github.com/users/vince62s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vince62s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vince62s/subscriptions",
"organizations_url": "https://api.github.com/users/vince62s/orgs",
"repos_url": "https://api.github.com/users/vince62s/repos",
"events_url": "https://api.github.com/users/vince62s/events{/privacy}",
"received_events_url": "https://api.github.com/users/vince62s/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7469/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7463 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7463/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7463/comments | https://api.github.com/repos/huggingface/datasets/issues/7463/events | https://github.com/huggingface/datasets/pull/7463 | 2,925,924,452 | PR_kwDODunzps6O-I6K | 7,463 | Adds EXR format to store depth images in float32 | {
"login": "ducha-aiki",
"id": 4803565,
"node_id": "MDQ6VXNlcjQ4MDM1NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4803565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ducha-aiki",
"html_url": "https://github.com/ducha-aiki",
"followers_url": "https://api.github.com/users/ducha-aiki/followers",
"following_url": "https://api.github.com/users/ducha-aiki/following{/other_user}",
"gists_url": "https://api.github.com/users/ducha-aiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ducha-aiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ducha-aiki/subscriptions",
"organizations_url": "https://api.github.com/users/ducha-aiki/orgs",
"repos_url": "https://api.github.com/users/ducha-aiki/repos",
"events_url": "https://api.github.com/users/ducha-aiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/ducha-aiki/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! I'mn wondering if this shouldn't this be an `Image()` type and decoded as a `PIL.Image` ?\r\n\r\nThis would make it easier to integrate with the rest of the HF ecosystem, and you could still get a numpy array using `ds = ds.with_format(\"numpy\")` which sets all the images to be formatted as numpy arrays"
] | 2025-03-17T17:42:40 | 2025-03-18T15:25:30 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | This PR adds the EXR feature to store depth images (or can be normals, etc) in float32.
It relies on [openexr_numpy](https://github.com/martinResearch/openexr_numpy/tree/main) to manipulate EXR images.
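A minimal sketch of the round-trip this enables, assuming the `imread`/`imwrite` API from the linked openexr_numpy project:

```python
import numpy as np
from openexr_numpy import imread, imwrite  # API assumed from the linked project

# A float32 depth map should survive an EXR round-trip without precision loss.
depth = np.random.rand(480, 640).astype(np.float32)
imwrite("depth.exr", depth)
restored = imread("depth.exr")
assert restored.dtype == np.float32
assert np.array_equal(depth, restored)
```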
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7463/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7463",
"html_url": "https://github.com/huggingface/datasets/pull/7463",
"diff_url": "https://github.com/huggingface/datasets/pull/7463.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7463.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7472/comments | https://api.github.com/repos/huggingface/datasets/issues/7472/events | https://github.com/huggingface/datasets/issues/7472 | 2,937,607,272 | I_kwDODunzps6vGFRo | 7,472 | Label casting during `map` process is canceled after the `map` process | {
"login": "yoshitomo-matsubara",
"id": 11156001,
"node_id": "MDQ6VXNlcjExMTU2MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/11156001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoshitomo-matsubara",
"html_url": "https://github.com/yoshitomo-matsubara",
"followers_url": "https://api.github.com/users/yoshitomo-matsubara/followers",
"following_url": "https://api.github.com/users/yoshitomo-matsubara/following{/other_user}",
"gists_url": "https://api.github.com/users/yoshitomo-matsubara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoshitomo-matsubara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoshitomo-matsubara/subscriptions",
"organizations_url": "https://api.github.com/users/yoshitomo-matsubara/orgs",
"repos_url": "https://api.github.com/users/yoshitomo-matsubara/repos",
"events_url": "https://api.github.com/users/yoshitomo-matsubara/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoshitomo-matsubara/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-03-21T07:56:22 | 2025-03-21T07:58:14 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | ### Describe the bug
When preprocessing a multi-label dataset, I introduced a step to convert int labels to float labels, as [BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html) expects float labels and the forward function of models in the transformers package internally uses `BCEWithLogitsLoss`.
However, the casting was canceled after the `.map` process, and the label values still used int values, which leads to an error:
```
  File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/transformers/models/bert/modeling_bert.py", line 1711, in forward
    loss = loss_fct(logits, labels)
  File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/loss.py", line 819, in forward
    return F.binary_cross_entropy_with_logits(
  File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/functional.py", line 3628, in binary_cross_entropy_with_logits
    return torch.binary_cross_entropy_with_logits(
RuntimeError: result type Float can't be cast to the desired output type Long
```
This seems to happen only when the original labels are int values (see the examples below).
### Steps to reproduce the bug
If the original dataset uses lists of int labels, the int->float casting is canceled:
```python
from datasets import Dataset

data = {
    'text': ['text1', 'text2', 'text3', 'text4'],
    'labels': [[0, 1, 2], [3], [3, 4], [3]]
}

dataset = Dataset.from_dict(data)
label_set = set([label for labels in data['labels'] for label in labels])
label2idx = {label: idx for idx, label in enumerate(sorted(label_set))}


def multi_labels_to_ids(labels):
    ids = [0.0] * len(label2idx)
    for label in labels:
        ids[label2idx[label]] = 1.0
    return ids


def preprocess(examples):
    result = {'sentence': [[0, 3, 4] for _ in range(len(examples['labels']))]}
    print('"labels" are int', examples['labels'])
    result['labels'] = [multi_labels_to_ids(l) for l in examples['labels']]
    print('"labels" were converted to multi-label format with float values', result['labels'])
    return result


preprocessed_dataset = dataset.map(preprocess, batched=True, remove_columns=['labels', 'text'])
print(preprocessed_dataset[0]['labels'])
# Output: "[1, 1, 1, 0, 0]"
# Expected: "[1.0, 1.0, 1.0, 0.0, 0.0]"
```
If the original dataset uses non-int labels, it works as expected:
```python
from datasets import Dataset

data = {
    'text': ['text1', 'text2', 'text3', 'text4'],
    'labels': [['label1', 'label2', 'label3'], ['label4'], ['label4', 'label5'], ['label4']]
}

dataset = Dataset.from_dict(data)
label_set = set([label for labels in data['labels'] for label in labels])
label2idx = {label: idx for idx, label in enumerate(sorted(label_set))}


def multi_labels_to_ids(labels):
    ids = [0.0] * len(label2idx)
    for label in labels:
        ids[label2idx[label]] = 1.0
    return ids


def preprocess(examples):
    result = {'sentence': [[0, 3, 4] for _ in range(len(examples['labels']))]}
    print('"labels" are strings', examples['labels'])
    result['labels'] = [multi_labels_to_ids(l) for l in examples['labels']]
    print('"labels" were converted to multi-label format with float values', result['labels'])
    return result


preprocessed_dataset = dataset.map(preprocess, batched=True, remove_columns=['labels', 'text'])
print(preprocessed_dataset[0]['labels'])
# Output: "[1.0, 1.0, 1.0, 0.0, 0.0]"
# Expected: "[1.0, 1.0, 1.0, 0.0, 0.0]"
```
Note that the only difference between these two examples is
> 'labels': [[0, 1, 2], [3], [3, 4], [3]]
vs.
> 'labels': [['label1', 'label2', 'label3'], ['label4'], ['label4', 'label5'], ['label4']]
### Expected behavior
Even if the original dataset uses a list of int labels, the int->float casting during the `.map` process should not be canceled, as shown in the above example.
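One possible workaround (a sketch, not a confirmed fix): pass an explicit `features=` schema to `map` so the output type is pinned to floats instead of being inferred back to ints. This reuses `dataset` and `preprocess` from the first snippet above.

```python
import datasets

features = datasets.Features({
    'sentence': datasets.Sequence(datasets.Value('int64')),
    'labels': datasets.Sequence(datasets.Value('float32')),
})
preprocessed_dataset = dataset.map(
    preprocess,
    batched=True,
    remove_columns=['labels', 'text'],
    features=features,  # pin float labels instead of the auto-inferred int type
)
print(preprocessed_dataset[0]['labels'])  # expected: [1.0, 1.0, 1.0, 0.0, 0.0]
```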
### Environment info
OS Ubuntu 22.04 LTS
Python 3.10.11
datasets v3.4.1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7472/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7464/comments | https://api.github.com/repos/huggingface/datasets/issues/7464/events | https://github.com/huggingface/datasets/pull/7464 | 2,926,478,838 | PR_kwDODunzps6PABJa | 7,464 | Minor fix for metadata files in extension counter | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7464). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-03-17T21:57:11 | 2025-03-18T15:21:43 | 2025-03-18T15:21:41 | MEMBER | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7464/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7464",
"html_url": "https://github.com/huggingface/datasets/pull/7464",
"diff_url": "https://github.com/huggingface/datasets/pull/7464.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7464.patch",
"merged_at": "2025-03-18T15:21:41"
} | true |
Downloads last month: 113