---
task_categories:
  - text-generation
language:
  - en
size_categories:
  - 100B<n<1T
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*.parquet
  - config_name: detokenized
    data_files:
      - split: train
        path: detokenized/*.parquet
---

This dataset contains the fully prepared data used to train the Pythia (deduplicated) models: it has already been tokenised and pre-shuffled. You can find these models under the EleutherAI organisation; they are also listed in my Memorisation-Profiles collection.

This data is the same as that found in EleutherAI/pile-deduped-pythia-preshuffled, but presented in a more manageable format: instead of the Megatron format used by the GPT-NeoX library, the data is stored as parquet files.
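
For instance, here is a minimal sketch of streaming the tokenised split with the `datasets` library; the repo id matches this dataset, and `default` / `detokenized` are the two configs defined in the metadata above:

```python
from datasets import load_dataset

# Stream the tokenised config directly from the Hub instead of downloading it all.
# Switch "default" to "detokenized" to get the plain-text version instead.
ds = load_dataset(
    "pietrolesci/pile-deduped-pythia-preshuffled",
    "default",
    split="train",
    streaming=True,
)
print(next(iter(ds)))  # a dict with the columns described below
```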

## Format

### /data

The /data folder contains the original tokenised, packed, and preshuffled data.

The dataset has 3 columns (see the sketch below for a quick inspection):

- `uid`: a sequential identifier for the sequence (not present in the original dataset).
- `batch_idx`: the index of the batch to which a sequence belongs (not present in the original dataset).
- `token_ids`: the tokenised text; each sequence is 2049 tokens long.
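
As a quick sanity check of this schema, here is a hedged sketch of inspecting one row and decoding it back to text. The tokenizer repo below is an assumption: any Pythia (deduped) checkpoint ships the same GPT-NeoX tokenizer.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Stream one row and decode its token ids back to text.
ds = load_dataset(
    "pietrolesci/pile-deduped-pythia-preshuffled", "default", split="train", streaming=True
)
tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m-deduped")

row = next(iter(ds))
print(row["uid"], row["batch_idx"], len(row["token_ids"]))  # expected length: 2049
print(tok.decode(row["token_ids"][:64]))  # decode the first 64 tokens of the sequence
```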

The dataset is split into 143 chunks (parquet files), and each chunk contains 1,024,000 sequences (rows), corresponding to 1000 batches of 1024 sequences each. Each chunk therefore covers the data seen between one checkpoint and the next. Specifically, the Pythia model checkpoints are available* at initialisation (step 0) and every 1000 steps thereafter (steps 1000, 2000, etc.) up to the last checkpoint (step 143000). This structure is reflected in the filenames: `train-001000.parquet`, `train-002000.parquet`, ..., `train-143000.parquet`. Let's clarify the mapping between chunks and checkpoints with an example.

**Example**: Consider the file `train-001000.parquet`. It contains the sequences with `batch_idx` in [0, 999], which were "seen" by checkpoint 1000. Batches with `batch_idx` >= 1000 are only seen by later checkpoints.
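
In other words, a checkpoint at step S (a multiple of 1000) has seen exactly the chunks `train-001000.parquet` through `train-<S>.parquet`. A small sketch of this mapping (the helper name is hypothetical):

```python
# Hypothetical helper: list the chunks seen by the checkpoint at a given step.
def chunks_seen_by(step: int) -> list[str]:
    assert step % 1000 == 0 and 0 < step <= 143_000
    return [f"data/train-{s:06d}.parquet" for s in range(1000, step + 1, 1000)]

print(chunks_seen_by(3000))
# ['data/train-001000.parquet', 'data/train-002000.parquet', 'data/train-003000.parquet']
```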

*NOTE: Additional log-spaced checkpoints are available for the initial part of training (i.e., steps 1, 2, 4, 8, ..., 512). I did not create separate files for these, but you can easily subset the first chunk (i.e., `data/train-001000.parquet`), as sketched below.
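
A sketch of such a subset, assuming a local copy of the first chunk and pandas with the pyarrow engine: the batches seen by, say, the log-spaced checkpoint at step 512 are exactly those with `batch_idx < 512`.

```python
import pandas as pd

# Keep only the batches seen by the log-spaced checkpoint at step 512.
# The `filters` argument is pushed down to the parquet reader, so unneeded
# row groups can be skipped where possible.
seen_by_512 = pd.read_parquet(
    "data/train-001000.parquet",
    filters=[("batch_idx", "<", 512)],
)
```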

### idxmaps-npy

These files are used to load the packed, tokenised, and preshuffled data, starting from EleutherAI/pythia_deduped_pile_idxmaps, via the `GPT2Dataset` class implemented in the GPT-NeoX library.

You can read one of these numpy files as follows:

```python
import numpy as np

# Memory-map the index file rather than loading it fully into RAM
idx_file = np.load("<path_to_npy>", allow_pickle=True, mmap_mode="r")
```

Note: the dataset available under the /data folder is essentially what you would get by combining EleutherAI/pythia_deduped_pile_idxmaps with these index files.

## License

For the license, refer to the original dataset (EleutherAI/pile-deduped-pythia-preshuffled).

## Acknowledgements

Kudos to LLM360/AmberDatasets, which inspired this release.

## Interacting with the data

Besides clarity and ease of use, another great advantage of this release is that it lets you interact with the data without downloading it. The parquet format plays nicely with the Hugging Face Hub, so you can use its integrations with external tools such as DuckDB or pola-rs to run queries over the data. For example:

```python
import duckdb as db

# Count how many sequences belong to each batch, querying the parquet files
# directly on the Hub (no local download required)
df = db.sql("""
    SELECT batch_idx, count(1) AS count
    FROM 'hf://datasets/pietrolesci/pile-deduped-pythia-preshuffled/data/*.parquet'
    GROUP BY batch_idx
""").df()
```