---
task_categories:
- text-generation
language:
- en
size_categories:
- 100B<n<1T
---

# Pile-deduped Pythia pre-shuffled

This release provides the packed, tokenised, and pre-shuffled version of the deduplicated Pile, in the order in which it is seen during training by the Pythia (deduped) suite, stored as parquet files for easy access.

### `data`

The `data` folder stores the training batches as parquet files, chunked by training step: the first chunk, `data/train-001000.parquet`, covers the batches up to step 1000. Each row carries its `batch_idx`, i.e., the step at which the batch is consumed during training, so batches with `batch_idx` >= 1000 are only seen by later checkpoints.

*NOTE: Additional log-spaced checkpoints are available for the initial part of training (i.e., steps 1, 2, 4, ..., 512). I did not create a separate file for these, but you can easily subset the first chunk (i.e., `data/train-001000.parquet`).*

### `idxmaps-npy`

These are the files used to load the packed, tokenised, and pre-shuffled data starting from [`EleutherAI/pythia_deduped_pile_idxmaps`](https://huggingface.co/datasets/EleutherAI/pythia_deduped_pile_idxmaps) using the [GPT2Dataset](https://github.com/EleutherAI/gpt-neox/blob/71df4d5017f9f4919566a11454fe3a507ffdc632/megatron/data/gpt2_dataset.py#L29) class implemented in the GPT-NeoX library. You can read these numpy files as follows:

```python
import numpy as np

# Memory-map the index file instead of loading it fully into memory
idx_file = np.load("path/to/idx_file.npy", allow_pickle=True, mmap_mode="r")
```

Note: the dataset available under the `data` folder is essentially what you would get by combining `pythia_deduped_pile_idxmaps` with these files.

## License

For the license, refer to the original dataset ([EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled)).

## Acknowledgements

Kudos to [LLM360/AmberDatasets](https://huggingface.co/datasets/LLM360/AmberDatasets), which inspired this release.

## Interacting with the data

Besides clarity and ease of use, another advantage of this release is that it lets users interact with the data without downloading it. The parquet format plays nicely with the Hugging Face Hub, so you can use its integrations with external tools such as [DuckDB](https://huggingface.co/docs/hub/en/datasets-duckdb) or [pola-rs](https://huggingface.co/docs/hub/en/datasets-polars) to run queries over the data. For example:

```python
import duckdb as db

df = db.sql("""
    SELECT batch_idx, count(1) AS count
    FROM 'hf://datasets/pietrolesci/pile-deduped-pythia-preshuffled/data/*.parquet'
    GROUP BY batch_idx
""").df()
```
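
The same aggregation can be run with Polars. The snippet below is a minimal sketch, assuming a recent Polars version that supports `hf://` paths and glob patterns in `scan_parquet`:

```python
import polars as pl

# Lazily scan the remote parquet files; only the data needed for the
# aggregation is fetched, not the full dataset.
df = (
    pl.scan_parquet(
        "hf://datasets/pietrolesci/pile-deduped-pythia-preshuffled/data/*.parquet"
    )
    .group_by("batch_idx")
    .agg(pl.len().alias("count"))
    .collect()
)
```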
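
Queries can also target a single chunk, e.g., to subset the batches relevant to the early log-spaced checkpoints mentioned above. The following is a hypothetical sketch assuming that `batch_idx` indexes training steps starting from 0:

```python
import duckdb as db

# Assumption: batch_idx corresponds to the training step, so the batches seen
# by the early log-spaced checkpoints (steps 1, 2, 4, ..., 512) all live in
# the first chunk. Here we fetch only the very first batch.
first_batch = db.sql("""
    SELECT *
    FROM 'hf://datasets/pietrolesci/pile-deduped-pythia-preshuffled/data/train-001000.parquet'
    WHERE batch_idx = 0
""").df()
```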