pietrolesci committed
Commit 83bcdf0 · verified · 1 Parent(s): 83c17e0

Update README.md

Files changed (1): README.md (+8, -4)
README.md CHANGED
````diff
@@ -7,9 +7,11 @@ size_categories:
 - 100B<n<1T
 ---
 
-This dataset contains the fully prepared data, which has been tokenized and pre-shuffled, used to train the Pythia (deduplicated) models. You can find these models under the EleutherAI organization, and they are also listed in my [Memorisation Profiles collection](https://huggingface.co/collections/pietrolesci/memorisation-profiles-6619604c4594c878cd9d451f).
+This dataset contains the fully prepared data, which has been tokenized and pre-shuffled, used to train the Pythia (deduplicated) models.
+You can find these models under the EleutherAI organisation, and they are also listed in my [Memorisation-Profiles collection](https://huggingface.co/collections/pietrolesci/memorisation-profiles-6619604c4594c878cd9d451f).
 
-This data is the same as the one found in [EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled), but it is presented in a more manageable format. Instead of using the Megatron format used by the GPT-NeoX library, I have stored the data in a parquet format.
+This data is the same as the one found in [EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled),
+but it is presented in a more manageable format. Instead of using the Megatron format used by the GPT-NeoX library, I have stored the data in a parquet format.
 
 
 ## Format
@@ -29,7 +31,8 @@ Let's clarify the mapping between chunks and checkpoints with an example.
 Batches with `batch_idx` >= 1000 are only seen by later checkpoints.
 
 
-*NOTE: Additional log-spaced checkpoints are available for the initial part of training (i.e., steps 1, 2, 4, 8, ..., 512). I did not create a separate file for these, but you can easily subset the first chunk (i.e., `data/train-001000.parquet`).
+*NOTE: Additional log-spaced checkpoints are available for the initial part of training (i.e., steps 1, 2, 4, 8, ..., 512).
+I did not create a separate file for these, but you can easily subset the first chunk (i.e., `data/train-001000.parquet`).
 
 
 ## License
@@ -45,7 +48,8 @@ Kudos to [LLM360/AmberDatasets](https://huggingface.co/datasets/LLM360/AmberDatasets)
 ## Interacting with the data
 
 Besides clarity and ease of use, another great advantage of this release is that it allows users to easily interact with the data without downloading it.
-The parquet format plays nicely with the Hugging Face Hub. So you can use its integrations with external tools like [DuckDB](https://huggingface.co/docs/hub/en/datasets-duckdb) or [pola-rs](https://huggingface.co/docs/hub/en/datasets-polars) to run queries over the data.
+The parquet format plays nicely with the Hugging Face Hub.
+So, you can use its integrations with external tools like [DuckDB](https://huggingface.co/docs/hub/en/datasets-duckdb) or [pola-rs](https://huggingface.co/docs/hub/en/datasets-polars) to run queries over the data.
 For example,
 
 ```python
````
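
For reference, here is a minimal sketch of the kind of Hub-side query the updated "Interacting with the data" section points to. It assumes the parquet shards live under `data/` in this repository and that rows carry a `batch_idx` column, as the README suggests; the repo id below is hypothetical, so adjust it to the dataset's actual location.

```python
import polars as pl

# Hypothetical repo id; the shard name `data/train-001000.parquet` comes from the README.
FIRST_CHUNK = "hf://datasets/pietrolesci/pile-deduped-pythia-preshuffled/data/train-001000.parquet"

# Lazily scan the first chunk straight from the Hub (no full download) and peek at
# the batches covered by the early, log-spaced checkpoints (steps 1, 2, 4, ..., 512).
peek = (
    pl.scan_parquet(FIRST_CHUNK)
    .filter(pl.col("batch_idx") < 512)  # `batch_idx` column assumed from the README
    .head(5)
    .collect()
)
print(peek)
```

DuckDB offers a similar `hf://` integration for running the same kind of query, as described in the linked Hub docs.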