All the data files must have the same columns, but at some point there are 3 new columns ({'source', 'version', 'metadata'})
Hi, I've encountered a problem when calling load_dataset():
dataset = load_dataset("allenai/olmo-mix-1124", split="train", trust_remote_code=True, verification_mode=datasets.VerificationMode.NO_CHECKS)
Resolving data files: 100%|████████████████████████████████████████████████████████████| 28/28 [00:00<00:00, 247243.18it/s]
Resolving data files: 100%|████████████████████████████████████████████████████████████| 28/28 [00:00<00:00, 224123.11it/s]
Downloading data: 100%|████████████████████████████████████████████████████████████| 28/28 [00:00<00:00, 32.52files/s]
Generating train split: 0 examples [00:00, ? examples/s]
Traceback (most recent call last):
File "/home//.local/lib/python3.10/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
writer.write_table(table)
File "/home//.local/lib/python3.10/site-packages/datasets/arrow_writer.py", line 622, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/home//.local/lib/python3.10/site-packages/datasets/table.py", line 2292, in table_cast
return cast_table_to_schema(table, schema)
File "/home//.local/lib/python3.10/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
added: string
created: string
id: string
metadata: struct<abstract: string, abstract_count: int64, abstract_language: string, abstract_perplexity: double, extfieldsofstudy: list<item: string>, provenance: string, s2fieldsofstudy: list<item: string>, sha1: string, sources: list<item: string>, title: string, title_count: int64, title_language: string, title_perplexity: double, top_frequencies: list<item: struct<count: int64, token: string>>, year: int64>
  child 0, abstract: string
  child 1, abstract_count: int64
  child 2, abstract_language: string
  child 3, abstract_perplexity: double
  child 4, extfieldsofstudy: list<item: string>
      child 0, item: string
  child 5, provenance: string
  child 6, s2fieldsofstudy: list<item: string>
      child 0, item: string
  child 7, sha1: string
  child 8, sources: list<item: string>
      child 0, item: string
  child 9, title: string
  child 10, title_count: int64
  child 11, title_language: string
  child 12, title_perplexity: double
  child 13, top_frequencies: list<item: struct<count: int64, token: string>>
      child 0, item: struct<count: int64, token: string>
          child 0, count: int64
          child 1, token: string
  child 14, year: int64
source: string
text: string
version: string
to
{'id': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'added': Value(dtype='string', id=None), 'created': Value(dtype='string', id=None)}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home//.local/lib/python3.10/site-packages/datasets/load.py", line 2151, in load_dataset
builder_instance.download_and_prepare(
File "/home//.local/lib/python3.10/site-packages/datasets/builder.py", line 924, in download_and_prepare
self._download_and_prepare(
File "/home//.local/lib/python3.10/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home//.local/lib/python3.10/site-packages/datasets/builder.py", line 1741, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home//.local/lib/python3.10/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 3 new columns ({'source', 'version', 'metadata'})
This happened while the json dataset builder was generating data using
/home//.cache/huggingface/hub/datasets--allenai--olmo-mix-1124/snapshots/166bbe2db8563a30388e37b4cfd753b8252f3622/data/pes2o/pes2o-0000.json.gz
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
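For reference, a quick way to confirm which columns that shard actually has is to stream just the one file named in the traceback, so nothing big gets downloaded:

from datasets import load_dataset

# Stream a single shard to inspect its columns
ds = load_dataset(
    "allenai/olmo-mix-1124",
    data_files="data/pes2o/pes2o-0000.json.gz",
    split="train",
    streaming=True,
)
print(next(iter(ds)).keys())  # should show the extra 'source', 'version', 'metadata' columns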
Could you please help?
A workaround (which is not ideal) is to manually define specific data files and features when loading. For example:
from datasets import load_dataset, Features, Value

features = Features({
    'text': Value('string'),
    'id': Value('string'),
    'metadata': {
        'length': Value('int64'),
        'provenance': Value('string'),
        'revid': Value('string'),
        'url': Value('string'),
    },
    # Remaining top-level columns; string types are assumed here,
    # matching the schema shown in the traceback above.
    'added': Value('string'),
    'created': Value('string'),
    'source': Value('string'),
    'version': Value('string'),
})

dataset = load_dataset("allenai/olmo-mix-1124", data_files="data/wiki/*", features=features)
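If you need the right features for another subset, one way to find them (a sketch; it assumes pandas and huggingface_hub are installed so that hf:// paths resolve via fsspec) is to read a few rows of one shard and look at its columns:

import pandas as pd

# Peek at the first rows of one shard straight from the Hub;
# the path is the pes2o shard named in the traceback above.
url = "hf://datasets/allenai/olmo-mix-1124/data/pes2o/pes2o-0000.json.gz"
df = pd.read_json(url, lines=True, nrows=5)
print(df.columns.tolist())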
There are actually a few mistakes in the dataset metadata that cause errors when loading this dataset. For instance, the configuration (in the README) specifies:
- config_name: dclm
  data_files:
    - split: train
      path: data/dclm/*
but the actual data files are located at: data/dclm/raw/hero-run-fasttext_for_HF/filtered/OH_eli5_vs_rw_v2_bigram_200k_train/fasttext_openhermes_reddit_eli5_vs_rw_v2_bigram_200k_train/processed_data/global-shard_[[xx]]_of_10/local-shard_[[x]]_of_10
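Presumably the README entry would need a recursive glob to reach those nested shards, something along these lines (untested sketch):
- config_name: dclm
  data_files:
    - split: train
      path: data/dclm/**/*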
I think that's why the preview fails for this subset, too. Hope they'll fix it 🙏🏻
Hi, thanks all for the feedback, and apologies for the inconvenience. We have put in some work to solve this issue and it should now be resolved!
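If anyone still hits the old error from a cached copy, passing download_mode="force_redownload" to load_dataset may help; a lightweight way to verify the fix is to stream a single example:

from datasets import load_dataset

# Stream one example to check that the split now loads without the cast error
ds = load_dataset("allenai/olmo-mix-1124", split="train", streaming=True)
print(next(iter(ds)))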