---
language:
- en
size_categories:
- 100B<n<1T
---

This dataset contains the fully prepared data (tokenised and pre-shuffled) used to train the Pythia (deduped) models, which you can find under the EleutherAI organisation and also listed in my [Memorisation Profiles collection](https://huggingface.co/collections/pietrolesci/memorisation-profiles-6619604c4594c878cd9d451f).
It is the very same data as [EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled), just in a more manageable format: instead of storing the data in the Megatron format used by the GPT-NeoX library, I store it as Parquet files.


## Format

The dataset has three columns:
- `uid`: a sequential identifier for each sequence (not present in the original dataset).
- `batch_idx`: the index of the batch to which a sequence belongs (not present in the original dataset).
- `token_ids`: the tokenised texts, each 2049 tokens long.

The dataset is split into 143 chunks (i.e., Parquet files), each corresponding to the data seen by a checkpoint.
The Pythia model checkpoints are available at initialisation (i.e., step 0) and every 1000 steps thereafter (i.e., steps 1000, 2000, etc.) up to the last checkpoint (i.e., step 143000).
The filenames reflect this: `train-001000.parquet`, `train-002000.parquet`, ..., `train-143000.parquet`.
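
As a minimal sketch (assuming you have downloaded a chunk locally, e.g., with `huggingface_hub`), you can load and inspect a chunk with pandas:

```python
import pandas as pd

# Load one chunk (one Parquet file = the data seen by one checkpoint).
df = pd.read_parquet("data/train-001000.parquet")

print(df.columns.tolist())                           # ['uid', 'batch_idx', 'token_ids']
print(df["batch_idx"].min(), df["batch_idx"].max())  # batches 0 to 999
print(len(df["token_ids"].iloc[0]))                  # 2049 tokens per sequence
```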

**Example**: consider the file `train-001000.parquet`. It contains the sequences with `batch_idx` 0-999. These sequences were "seen" (i.e., the model took a gradient step on them) before checkpoint 1000 was taken.

Note that additional log-spaced checkpoints are available for the initial part of training (i.e., steps 1, 2, 4, 8, ..., 512). I did not create separate files for these, but you can easily subset the first chunk (i.e., `data/train-001000.parquet`), as shown below.
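
For instance, here is a minimal sketch (pandas again, relying on the batch-to-checkpoint correspondence described in the example above) that recovers the data seen before the log-spaced checkpoint at step 512:

```python
import pandas as pd

df = pd.read_parquet("data/train-001000.parquet")

# Per the example above, the checkpoint at step k was taken after
# batches 0..k-1, so the data seen before the step-512 checkpoint is:
seen_before_512 = df[df["batch_idx"] < 512]
```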


## License

For the license, refer to the original dataset ([EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled)).


## Acknowledgements

Kudos to [LLM360/AmberDatasets](https://huggingface.co/datasets/LLM360/AmberDatasets), which inspired this release.