---
task_categories:
  - text-generation
language:
  - en
size_categories:
  - 100M<n<1B
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*.parquet
  - config_name: tokenized
    data_files:
      - split: train
        path: tokenized/*.parquet
  - config_name: sequence-tracking
    data_files:
      - split: train
        path: sequence-tracking/*.parquet
  - config_name: stats
    data_files:
      - split: doc_len
        path: stats/doc_len.parquet
---

## Repo Structure

Each file contains 1M documents (apart from the last file, which contains the remaining documents). Each file is around 2GB in size (slight differences arise because some files contain longer or shorter documents than average). Each document is assigned a unique id (a simple sequential integer).
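
A minimal loading sketch, assuming the repo id is `pietrolesci/pile-deduped` and the raw text lives in a `text` column (the `id` and `num_chars` columns are the ones described in this card):

```python
# Sketch: stream the raw documents and peek at the sequential ids.
# Assumptions: repo id "pietrolesci/pile-deduped" and a "text" column;
# "id" and "num_chars" are the columns described in this README.
from datasets import load_dataset

ds = load_dataset("pietrolesci/pile-deduped", "default", split="train", streaming=True)

for i, doc in enumerate(ds):
    print(doc["id"], doc["num_chars"], doc["text"][:80])
    if i == 2:  # look at the first three documents only
        break
```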

- `/data`: The raw documents. This config matches EleutherAI/the_pile_deduplicated. One minor point: instead of copying that data, I detokenised the data in `/tokenized` (see below). Defining the original data as the detokenised data prevents any inconsistency in future analyses. Apart from minor differences (affecting roughly 0.1% of the data in my tests), such as angle brackets being detokenised to a different Unicode character, the original data and the detokenised data are identical and tokenise in exactly the same way. I added a column called `num_chars`, which reports the number of characters per document.

- `/tokenized`: Includes the data available in EleutherAI/pythia_deduped_pile_idxmaps. I added a column called `num_tokens`, which reports the number of tokens in each tokenised document. A loading sketch is shown after this list.
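
The sketch below loads the tokenized config and cross-checks `num_tokens`. The repo id, the `input_ids` column name, the use of the Pythia (GPT-NeoX) tokenizer, and the assumption that the `/data` and `/tokenized` configs are stored in the same document order are all assumptions, not confirmed by this card:

```python
# Sketch: inspect the tokenized config and optionally re-tokenise the raw text.
# Assumptions: repo id "pietrolesci/pile-deduped", an "input_ids" column in the
# tokenized config, the Pythia tokenizer, and matching document order across configs.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenized = load_dataset("pietrolesci/pile-deduped", "tokenized", split="train", streaming=True)
row = next(iter(tokenized))
print(row["num_tokens"], len(row["input_ids"]))  # the two should agree

# Round-trip check against the raw /data config: re-tokenising the detokenised
# text should reproduce the stored ids (up to the ~0.1% edge cases noted above).
raw = load_dataset("pietrolesci/pile-deduped", "default", split="train", streaming=True)
doc = next(iter(raw))
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
print(tokenizer(doc["text"])["input_ids"][:10], row["input_ids"][:10])

# Per-document length statistics live in the "stats" config under the "doc_len" split.
doc_len = load_dataset("pietrolesci/pile-deduped", "stats", split="doc_len")
print(doc_len)
```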