Update README.md
README.md CHANGED
@@ -7,9 +7,11 @@ size_categories:
 - 100B<n<1T
 ---
 
 This dataset contains the fully prepared data, which has been tokenized and pre-shuffled, used to train the Pythia (deduplicated) models.
+You can find these models under the EleutherAI organisation, and they are also listed in my [Memorisation-Profiles collection](https://huggingface.co/collections/pietrolesci/memorisation-profiles-6619604c4594c878cd9d451f).
 
 This data is the same as the one found in [EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled),
+but it is presented in a more manageable format. Instead of using the Megatron format used by the GPT-NeoX library, I have stored the data in a parquet format.
 
 
 ## Format
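As a quick illustration of the parquet layout this change describes, a single chunk can be fetched and inspected with standard tooling. This is a minimal sketch: the repo id below is a placeholder for this dataset's Hub id, and it uses the first chunk, `data/train-001000.parquet`.

```python
from huggingface_hub import hf_hub_download
import pyarrow.parquet as pq

# Placeholder: substitute this dataset's actual Hub repo id.
REPO_ID = "<this-dataset-repo-id>"

# Fetch a single chunk rather than the whole dataset.
path = hf_hub_download(
    repo_id=REPO_ID,
    filename="data/train-001000.parquet",
    repo_type="dataset",
)

# Inspect the schema and row count without loading the file into memory.
pf = pq.ParquetFile(path)
print(pf.schema_arrow)
print(pf.metadata.num_rows)
```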
@@ -29,7 +31,8 @@ Let's clarify the mapping between chunks and checkpoints with an example.
 Batches with `batch_idx` >= 1000 are only seen by later checkpoints.
 
 
 *NOTE: Additional log-spaced checkpoints are available for the initial part of training (i.e., steps 1, 2, 4, 8, ..., 512).
+I did not create a separate file for these, but you can easily subset the first chunk (i.e., `data/train-001000.parquet`).
 
 
 ## License
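The added note suggests subsetting the first chunk for the log-spaced checkpoints rather than shipping a separate file. A minimal sketch of that subsetting, assuming (as the `batch_idx` convention above implies) that a checkpoint at step `k` has seen exactly the batches with `batch_idx < k`:

```python
import polars as pl

# Lazily scan the first chunk, which covers the early, log-spaced
# checkpoints (steps 1, 2, 4, ..., 512).
first_chunk = pl.scan_parquet("data/train-001000.parquet")

# Assumption: a checkpoint at step k has seen the batches with batch_idx < k,
# mirroring the "batch_idx >= 1000" convention described above.
seen_by_step_512 = first_chunk.filter(pl.col("batch_idx") < 512).collect()

print(seen_by_step_512.height)
```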
@@ -45,7 +48,8 @@ Kudos to [LLM360/AmberDatasets](https://huggingface.co/datasets/LLM360/AmberDatasets)
 ## Interacting with the data
 
 Besides clarity and ease of use, another great advantage of this release is that it allows users to easily interact with the data without downloading it.
 The parquet format plays nicely with the Hugging Face Hub.
+So, you can use its integrations with external tools like [DuckDB](https://huggingface.co/docs/hub/en/datasets-duckdb) or [pola-rs](https://huggingface.co/docs/hub/en/datasets-polars) to run queries over the data.
 For example,
 
 ```python
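# A minimal sketch (not the card's original example): run a query over one
# chunk directly from the Hub via hf:// paths, without downloading it first.
# The repo id is a placeholder for this dataset's Hub id; `batch_idx` is the
# column referenced earlier in the card.
import polars as pl

REPO_ID = "<this-dataset-repo-id>"

n_early_rows = (
    pl.scan_parquet(f"hf://datasets/{REPO_ID}/data/train-001000.parquet")
    .filter(pl.col("batch_idx") < 1000)  # batches seen by the step-1000 checkpoint
    .select(pl.len())
    .collect()
)
print(n_early_rows)
```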