---
task_categories:
- text-generation
language:
- en
size_categories:
- 100B<n<1T
configs:
- config_name: default
data_files:
- split: train
path: data/*.parquet
- config_name: detokenized
data_files:
- split: train
path: detokenized/*.parquet
---
This dataset contains the fully prepared data, tokenised and pre-shuffled, that was used to train the Pythia (deduplicated) models.
You can find these models under the EleutherAI organisation, and they are also listed in my [Memorisation-Profiles collection](https://huggingface.co/collections/pietrolesci/memorisation-profiles-6619604c4594c878cd9d451f).
This is the same data as [EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled),
but in a more manageable format: instead of the Megatron format used by the GPT-NeoX library, I have stored the data as parquet files.
## Format
### `/data`
The `/data` folder contains the original tokenised, packed, and preshuffled data.
The dataset has 3 columns:
- `uid`: a sequential identifier for the sequence (not present in the original dataset).
- `batch_idx`: the index of the batch to which a sequence belongs (not present in the original dataset).
- `token_ids`: the tokenised texts, each of length 2049 tokens.

The dataset is split into 143 chunks (parquet files). Each chunk includes 1,024,000 sequences (rows), corresponding to 1000 batches of 1024 sequences each.
This means that each chunk corresponds to the data seen between one checkpoint and the next.
Specifically, the Pythia model checkpoints are available* at initialisation (step 0) and every 1000 steps (steps 1000, 2000, etc.) up to the last checkpoint (step 143000).
This structure is reflected in the filenames: `train-001000.parquet`, `train-002000.parquet`, ..., `train-143000.parquet`.
Let's clarify the mapping between chunks and checkpoints with an example.
**Example**: Consider file `train-001000.parquet`. It contains sequences with `batch_idx` in [0, 999]. These sequences were "seen" by checkpoint 1000.
Batches with `batch_idx` >= 1000 are only seen by later checkpoints.
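As a minimal sketch of this mapping (pure Python; the helper name is hypothetical), the chunk that contains a given `batch_idx` can be computed as follows:
```python
# Hypothetical helper: map a batch index to the chunk file that contains it,
# under the layout described above (chunk `train-{k:06d}.parquet` holds
# batch_idx values in [k - 1000, k - 1]).
def chunk_for_batch(batch_idx: int) -> str:
    checkpoint_step = (batch_idx // 1000 + 1) * 1000
    return f"data/train-{checkpoint_step:06d}.parquet"

print(chunk_for_batch(999))   # data/train-001000.parquet
print(chunk_for_batch(1000))  # data/train-002000.parquet
```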
*NOTE: Additional log-spaced checkpoints are available for the initial part of training (i.e., steps 1, 2, 4, 8, ..., 512).
I did not create a separate file for these, but you can easily subset the first chunk (i.e., `data/train-001000.parquet`).
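For example, here is a minimal pola-rs sketch of such a subset (assuming you have downloaded `data/train-001000.parquet` locally, or you can use the `hf://` path shown further below):
```python
import polars as pl

# Batches seen by the log-spaced checkpoint at step 512 are those with batch_idx < 512.
first_chunk = pl.scan_parquet("data/train-001000.parquet")
seen_by_step_512 = first_chunk.filter(pl.col("batch_idx") < 512).collect()

# Under the layout described above, this yields 512 * 1024 rows.
print(seen_by_step_512.shape)
```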
### `idxmaps-npy`
This folder contains the index files used to load the packed, tokenised, and preshuffled data
from [`EleutherAI/pythia_deduped_pile_idxmaps`](https://huggingface.co/datasets/EleutherAI/pythia_deduped_pile_idxmaps)
with the [GPT2Dataset](https://github.com/EleutherAI/gpt-neox/blob/71df4d5017f9f4919566a11454fe3a507ffdc632/megatron/data/gpt2_dataset.py#L29)
class implemented in the GPT-NeoX library.
You can read these numpy files as follows:
```python
import numpy as np

# Memory-map the .npy index file rather than loading it fully into memory
idx_file = np.load(<path_to_npy>, allow_pickle=True, mmap_mode="r")
```
Note: the dataset available under the `/data` folder is essentially what you obtain by combining `pythia_deduped_pile_idxmaps` with these index files.
## License
For the license, refer to the original dataset ([EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled)).
## Acknowledgements
Kudos to [LLM360/AmberDatasets](https://huggingface.co/datasets/LLM360/AmberDatasets), which inspired this release.
## Interacting with the data
Besides clarity and ease of use, another advantage of this release is that you can interact with the data without downloading it.
The parquet format plays nicely with the Hugging Face Hub.
So, you can use its integrations with external tools like [DuckDB](https://huggingface.co/docs/hub/en/datasets-duckdb) or [pola-rs](https://huggingface.co/docs/hub/en/datasets-polars) to run queries over the data.
For example, with DuckDB:
```python
import duckdb as db
# Count the number of sequences in each batch across all chunks
df = db.sql("""
SELECT batch_idx, count(1) as count
FROM 'hf://datasets/pietrolesci/pile-deduped-pythia-preshuffled/data/*.parquet'
GROUP BY batch_idx
""").df()
```
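Similarly, a minimal pola-rs sketch of the same query (assuming a recent polars version with `hf://` path support):
```python
import polars as pl

# Lazily scan all chunks on the Hub and count sequences per batch
lf = pl.scan_parquet("hf://datasets/pietrolesci/pile-deduped-pythia-preshuffled/data/*.parquet")
counts = lf.group_by("batch_idx").agg(pl.len().alias("count")).collect()
```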