Update README.md
README.md
CHANGED
@@ -193,6 +193,8 @@ This dataset consists of 120 million texts (approximately 89.3B tokens) filtered
 - **small_tokens**: Data composed solely of texts with 512 tokens or fewer
 - **small_tokens_cleaned**: Data from small_tokens with Web-specific text noise removed
 
+⚠️ WARNING: We apologize for the inconvenience. Please note that **small_tokens** and **small_tokens_cleaned** contain duplicate data: the ranges 0-9999 and 10000-19999 are identical. When using these subsets, please skip the first 10,000 items, e.g. with `ds.select(range(10000, len(ds)))`.
+
 [For the introduction article in Japanese, click here.](https://secon.dev/entry/2025/02/20/100000-fineweb-2-edu-japanese/)
 
 
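For reference, a minimal sketch of the recommended workaround using the Hugging Face `datasets` library. The repository ID below is an assumption for illustration (only the subset names `small_tokens` and `small_tokens_cleaned` come from the README); substitute the actual dataset path you are loading.

```python
from datasets import load_dataset

# Load the small_tokens subset (repository ID is assumed for illustration).
ds = load_dataset("hotchpotch/fineweb-2-edu-japanese", "small_tokens", split="train")

# Skip the first 10,000 items, which duplicate the 10000-19999 range.
ds = ds.select(range(10000, len(ds)))

print(len(ds))  # deduplicated length
```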