pietrolesci committed · Commit 83c17e0 · verified · 1 Parent(s): aba19e1

Update README.md

Files changed (1):
1. README.md (+4, -4)
README.md CHANGED
```diff
@@ -7,7 +7,7 @@ size_categories:
 - 100B<n<1T
 ---
 
-This dataset contains the fully prepared data, which has been tokenized and pre-shuffled, used to train the Pythia (deduplicated) models. You can find these models under the EleutherAI organization, and they are also listed in my [Memorization Profiles collection](https://huggingface.co/collections/pietrolesci/memorisation-profiles-6619604c4594c878cd9d451f).
+This dataset contains the fully prepared data, which has been tokenized and pre-shuffled, used to train the Pythia (deduplicated) models. You can find these models under the EleutherAI organization, and they are also listed in my [Memorisation Profiles collection](https://huggingface.co/collections/pietrolesci/memorisation-profiles-6619604c4594c878cd9d451f).
 
 This data is the same as the one found in [EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled), but it is presented in a more manageable format. Instead of the Megatron format used by the GPT-NeoX library, I have stored the data in parquet format.
 
@@ -16,11 +16,11 @@ This data is the same as the one found in [EleutherAI/pile-deduped-pythia-preshu
 
 The dataset has 3 columns:
 - `uid`: a sequential identifier for the sequence (not present in the original dataset)
-- `batch_idx`: the index of the batch to which a sequence belongs to (not present in the original dataset).
+- `batch_idx`: the index of the batch to which a sequence belongs (not present in the original dataset).
 - `token_ids`: the tokenised texts, each of length 2049 tokens.
 
 The dataset is split into 143 chunks (parquet files), and each chunk includes 1024000 sequences (rows) corresponding to 1000 batches, each formed by 1024 sequences.
-This means that each chunk corresponds to the data seen between between one checkpoint and the next.
+This means that each chunk corresponds to the data seen between one checkpoint and the next.
 Specifically, the Pythia model checkpoints are available* at initialisation (step 0) and every 1000 steps (steps 1000, 2000, etc.) up to the last checkpoint (step 143000).
 We reflect this structure in the filenames: `train-001000.parquet`, `train-002000.parquet`, ..., `train-143000.parquet`.
 Let's clarify the mapping between chunks and checkpoints with an example.
```
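As a concrete companion to the chunk-to-checkpoint mapping described in this hunk, here is a minimal Python sketch. It assumes training steps are 0-indexed, so that `train-001000.parquet` holds the 1000 batches consumed between checkpoint 0 and checkpoint 1000; the helper name is illustrative, not part of the dataset.

```python
def chunk_for_step(step: int) -> str:
    """Return the parquet chunk holding the batch consumed at training `step`.

    Sketch under the assumption that steps are 0-indexed, so
    `train-001000.parquet` covers steps 0-999 (the data seen between
    checkpoint 0 and checkpoint 1000).
    """
    if not 0 <= step < 143_000:
        raise ValueError("Pythia training spans steps 0 to 142999")
    next_checkpoint = (step // 1000 + 1) * 1000
    return f"train-{next_checkpoint:06d}.parquet"


print(chunk_for_step(0))       # train-001000.parquet
print(chunk_for_step(999))     # train-001000.parquet
print(chunk_for_step(1000))    # train-002000.parquet
print(chunk_for_step(142999))  # train-143000.parquet
```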
```diff
@@ -44,7 +44,7 @@ Kudos to [LLM360/AmberDatasets](https://huggingface.co/datasets/LLM360/AmberData
 
 ## Interacting with the data
 
-Besides clarity and ease-of-use, another great advantage of this release is that it allows users to easily interact with the data without downloading it.
+Besides clarity and ease of use, another great advantage of this release is that it allows users to easily interact with the data without downloading it.
 The parquet format plays nicely with the Hugging Face Hub, so you can use the Hub's integrations with external tools like [DuckDB](https://huggingface.co/docs/hub/en/datasets-duckdb) or [pola-rs](https://huggingface.co/docs/hub/en/datasets-polars) to run queries over the data.
 For example,
```
 
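To give a flavour of the queries that last sentence introduces (the README's own example is truncated out of this hunk), here is a hedged sketch using polars' `hf://` Hub integration. The repo id and file layout are assumptions based on this dataset's naming; substitute the actual paths.

```python
import polars as pl

# Assumed repo id and file layout -- substitute this dataset's actual path.
URI = "hf://datasets/pietrolesci/pile-deduped-pythia-preshuffled/train-001000.parquet"

# Lazily scan a single chunk straight from the Hub: only the data the
# query touches is fetched, not the whole dataset.
first_batch = (
    pl.scan_parquet(URI)
    .filter(pl.col("batch_idx") == 0)  # assuming batch indices start at 0 in this chunk
    .select("uid", "batch_idx")
    .head(5)
    .collect()
)
print(first_batch)
```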