Zyda-2
Zyda-2 is a 5-trillion-token language modeling dataset created by collecting open, high-quality datasets, combining them, and applying cross-deduplication and model-based quality filtering. Zyda-2 comprises diverse sources of web data, highly educational content, math, code, and scientific papers.
To construct Zyda-2, we took the best open-source datasets available: Zyda, FineWeb, DCLM, and Dolma. Models trained on Zyda-2 significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM. Due to our post-processing deduplication, filtering, and weighting pipeline, Zyda-2 outperforms all its constituent datasets in resulting model quality.
An early version of Zyda-2 was used as the primary dataset for phase 1 pretraining of our Zamba2 series of models, which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda-2 as a pretraining dataset.
According to our evaluations, Zyda-2 is the most performant per-token open dataset available. Zyda-2 excels at educational and natural language reasoning content. For code performance, we recommend mixing it with a pure code dataset such as Starcoder.

For more information, please see our technical blog.
How to download
We preserved the schemas of the original component datasets, meaning that every component has its own schema. For that reason, attempting to download the whole dataset using datasets.load_dataset() will fail at the split-generation stage. Attempting to stream the default config will also fail.
To download the whole dataset, we recommend either cloning the repository or, if you must use datasets.load_dataset(), downloading the individual components separately.
Only nemo_id and text are columns common to all components. Select those columns for every component first, and only then interleave the datasets with the optimal weights (see the example at the bottom of this section).
Example command to clone the repository using huggingface-cli:
huggingface-cli download Zyphra/Zyda-2 --repo-type dataset
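If you prefer to do the same from Python, here is a minimal sketch using the huggingface_hub library (the local_dir value below is an illustrative placeholder):
from huggingface_hub import snapshot_download

# Download every file in the dataset repository to a local directory
snapshot_download(
    repo_id="Zyphra/Zyda-2",
    repo_type="dataset",
    local_dir="Zyda-2",  # hypothetical target directory, change as needed
)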
Commands to download individual components:
- DCLM:
ds_dclm = datasets.load_dataset("Zyphra/Zyda-2", name="dclm_crossdeduped", split="train")
- Zyda:
ds_zyda = datasets.load_dataset("Zyphra/Zyda-2", name="zyda_crossdeduped-filtered", split="train")
- Dolma-CC:
ds_dolma = datasets.load_dataset("Zyphra/Zyda-2", name="dolma-cc_crossdeduped-filtered", split="train")
- Fineweb-Edu:
ds_fwe = datasets.load_dataset("Zyphra/Zyda-2", name="fwe3", split="train")
In this repository we provide the raw results of cross-deduplication and filtering. To achieve the best possible performance, one will need to use appropriate weights during training. We found the following optimal weights by number of tokens (i.e., weights of each component in the resultant dataset): DCLM - 4.0, FWE3 - 4.0, Zyda - 0.16, Dolma-CC - 0.24.
Below you will find an example of how to obtain a properly weighted dataset object. It demonstrates how to select only the nemo_id and text columns, and then interleave the datasets with probabilities computed from the weights above. Be careful with weight normalization: interleave_datasets() samples documents, while our weights are token-wise, so the token-wise weights must be converted to document-wise probabilities. We provide precomputed document-wise weights in the example below (a short derivation follows the code). To stream the dataset, add streaming=True to the load_dataset() commands.
# Keep only the columns shared by all components
common_columns = ["nemo_id", "text"]
ds_dclm = ds_dclm.select_columns(common_columns)
ds_zyda = ds_zyda.select_columns(common_columns)
ds_dolma = ds_dolma.select_columns(common_columns)
ds_fwe = ds_fwe.select_columns(common_columns)
# Document-wise sampling probabilities (order: DCLM, Zyda, Dolma-CC, FWE3)
norm_weights = [0.4038, 0.0316, 0.0585, 0.5061]
ds = datasets.interleave_datasets([ds_dclm, ds_zyda, ds_dolma, ds_fwe], probabilities=norm_weights, stopping_strategy="all_exhausted")
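For reference, the document-wise probabilities above can be derived from the token-wise weights and the per-component statistics in the breakdown table further down this card. A minimal sketch (the numbers are taken from that table, so the result matches the precomputed weights only up to rounding):
# Order: DCLM, Zyda, Dolma-CC, FWE3
token_weights = [4.0, 0.16, 0.24, 4.0]               # optimal token-wise weights
docs_millions = [2590.5, 247.7, 445.6, 1279.1]       # documents (millions)
tokens_billions = [3348.942, 163.6, 238.4, 1319.2]   # gpt-neox tokens (billions)

# interleave_datasets() samples documents, so divide each token-wise weight
# by the component's average document length (in tokens) and renormalize
avg_tokens_per_doc = [t * 1e9 / (d * 1e6) for t, d in zip(tokens_billions, docs_millions)]
doc_weights = [w / a for w, a in zip(token_weights, avg_tokens_per_doc)]
norm_weights = [w / sum(doc_weights) for w in doc_weights]
print(norm_weights)  # approximately [0.4038, 0.0316, 0.0585, 0.5061]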
(Smaller) sample version
Along with the configs above, you can also download a smaller version of the dataset with the following config:
- sample-100BT: a subset randomly sampled from the whole dataset containing around 100B gpt-neox tokens (252GB, 91.2M documents).
This sample only has the common columns nemo_id and text. In addition, it was sampled according to the optimal weights, so you can start using it directly.
ds_sample = datasets.load_dataset("Zyphra/Zyda-2", name="sample-100BT", split="train")
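If you only want to inspect a handful of documents without downloading the full sample, streaming works here as well. A minimal sketch:
import datasets

ds_sample = datasets.load_dataset("Zyphra/Zyda-2", name="sample-100BT", split="train", streaming=True)
for i, doc in enumerate(ds_sample):
    print(doc["nemo_id"], doc["text"][:100])  # document id and a short text snippet
    if i >= 2:
        break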
Breakdown by component
Component | Download size (parquet, GB) | Documents (millions) | gpt-neox tokens (billions) |
---|---|---|---|
dclm-crossdeduped | 8,469.4 | 2,590.5 | 3,348.942 |
zyda-crossdeduped-filtered | 452.4 | 247.7 | 163.6 |
dolma_cc-crossdeduped-filtered | 668.2 | 445.6 | 238.4 |
fwe3 | 3,490.5 | 1,279.1 | 1,319.2 |
Total | 13,080.5 | 4,562.8 | 5,070.2 |
Dataset Description
- Curated by: Zyphra
- Language(s) (NLP): Primarily English
- License: Open Data Commons Attribution License (ODC-By)
Dataset Structure
Each component has its own individual schema. Please consult their respective sources for exact information.
However, in all components the document text is in the text column, and the unique document id is in the nemo_id column.
Our Zyda-1 and Dolma-CC versions also have two additional columns with the predictions of NVIDIA's quality model (https://huggingface.co/nvidia/quality-classifier-deberta): quality_prob and quality_pred.
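As an illustration, these columns can be used to subset the Zyda-1 or Dolma-CC components by predicted quality. The sketch below is only a sketch: the exact label strings emitted by the classifier and the 0.9 probability threshold are assumptions you should verify against the actual data.
import datasets

ds_zyda = datasets.load_dataset("Zyphra/Zyda-2", name="zyda_crossdeduped-filtered", split="train", streaming=True)
# Keep only documents the quality classifier is confident about.
# The "High" label value and the 0.9 threshold are illustrative assumptions.
ds_zyda_high = ds_zyda.filter(lambda x: x["quality_pred"] == "High" and x["quality_prob"] >= 0.9)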
Source Data
Zyda-2 comprises four high-quality open-source datasets:
- Zyda-1: https://huggingface.co/datasets/Zyphra/Zyda
- Dolma-CC v1.7: https://huggingface.co/datasets/allenai/dolma
- DCLM-baseline: https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
- FineWeb-Edu-score2: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2

Personal and Sensitive Information
As a language modeling dataset, Zyda-2 likely contains PII that was not filtered out of the component datasets and may have been missed by our own filters.
Bias, Risks, and Limitations
As a dataset built from open web scrapes, Zyda-2 likely contains biased and toxic content.
Licensing Information
We are releasing this dataset under the terms of ODC-BY. By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.
Citation
If you use our dataset to train a model, please cite us as follows:
@misc{zyphra_nvidia_2024,
  author = {Yury Tokpanov and Paolo Glorioso and Ayush Dattagupta and Vibhu Jawa and Ryan Wolf and Vikranth Jeyakumar and Arham Mehta and Quentin Anthony and Beren Millidge},
  title = {Building {Zyda-2}, a 5 {Trillion} {Token} {High-Quality} {Dataset}, with {NVIDIA} {NeMo} {Curator}},
  url = {https://www.zyphra.com/post/building-zyda-2},
  publisher = {Zyphra},
  year = {2024},
  month = {October},
  day = {15}
}