---
pretty_name: edu_fineweb10B_sharded_50shards
language:
- en
tags:
- language-modeling
- tokenized
- tiktoken
- npy
- gpt2
- sharded
- autoregressive
task_categories:
- text-generation
size_categories:
- 10B<n<100B
license: mit
---
# Dataset Card for edu_fineweb10B_sharded_50shards
This dataset card describes `edu_fineweb10B_sharded_50shards`, a large-scale, pre-tokenized, sharded dataset built from the FineWeb-Edu corpus. It is stored as NumPy arrays for efficient loading and is intended for training transformer-based language models.
## Dataset Details

### Dataset Description
`edu_fineweb10B_sharded_50shards` is a tokenized dataset based on the FineWeb-Edu 10B corpus, designed for scalable training of language models. It contains a total of 10 billion tokens split across 50 shards: 49 for training and 1 for evaluation.

Each shard is stored in `.npy` format and contains 200 million token IDs, pre-tokenized with the `tiktoken` tokenizer using the GPT-2 encoding (compatible with GPT-2-style models). The dataset is designed for high-efficiency training pipelines, particularly in distributed or multi-GPU setups.
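For reference, a minimal sketch of how a shard of this kind can be produced with `tiktoken`. The exact preprocessing pipeline is not documented here, so the document delimiter, the `uint16` buffer dtype, and the `build_shard` helper are assumptions, not the actual build script:

```python
import numpy as np
import tiktoken

enc = tiktoken.get_encoding("gpt2")
eot = enc.eot_token  # <|endoftext|>, assumed here to delimit documents

def build_shard(docs, shard_size=200_000_000):
    """Concatenate tokenized documents into one flat buffer of shard_size tokens."""
    buf = np.empty(shard_size, dtype=np.uint16)  # GPT-2 IDs (< 50257) fit in 16 bits (assumed storage dtype)
    count = 0
    for doc in docs:
        tokens = [eot] + enc.encode_ordinary(doc)  # delimiter between documents (assumption)
        take = min(len(tokens), shard_size - count)
        buf[count:count + take] = tokens[:take]
        count += take
        if count == shard_size:
            break
    return buf[:count]

# Hypothetical usage: corpus_docs is an iterable of raw text documents
# np.save("edu_fineweb_train_000000.npy", build_shard(corpus_docs))
```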
- **Curated by:** Private (individual researcher)
- **Funded by [optional]:** Not funded
- **Shared by [optional]:** Abhinav
- **Language(s) (NLP):** English
- **License:** MIT
### Dataset Sources
- **Repository:** [Private / Local Project]
- **Paper [optional]:** N/A
- **Demo [optional]:** N/A
## Uses

### Direct Use
The dataset is suitable for:
- Pretraining large autoregressive language models (e.g., GPT-style); see the batching sketch after this list
- Finetuning models for general-purpose language generation
- Research in scaling laws, memory-efficient training, and model evaluation
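For the pretraining use case, inputs and next-token targets can be cut directly from the flat token stream. A minimal sketch of random-offset batch sampling; the helper name, batch size, and context length are illustrative, not part of the dataset:

```python
import numpy as np
import torch

def get_batch(tokens: torch.Tensor, batch_size: int, context_len: int):
    """Sample (input, target) pairs for next-token prediction from a flat token stream."""
    starts = torch.randint(0, len(tokens) - context_len - 1, (batch_size,))
    x = torch.stack([tokens[s : s + context_len] for s in starts.tolist()])
    y = torch.stack([tokens[s + 1 : s + context_len + 1] for s in starts.tolist()])  # shifted by one
    return x, y

tokens = torch.tensor(
    np.load("edu_fineweb_train_000000.npy").astype(np.int32), dtype=torch.long
)
x, y = get_batch(tokens, batch_size=8, context_len=1024)
print(x.shape, y.shape)  # torch.Size([8, 1024]) torch.Size([8, 1024])
```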
### Out-of-Scope Use
- Not suited for supervised learning tasks without additional labeling
- Not appropriate for real-time applications without additional safety filtering
- Not recommended for use in sensitive, safety-critical environments without thorough auditing
## Dataset Structure
- **Format:** `.npy` files
- **Tokenizer used:** `tiktoken.get_encoding("gpt2")`
- **Tokens per shard:** 200 million
- **Total shards:** 50
- **Shard naming format:**
  - Training: `edu_fineweb_train_000000.npy` to `edu_fineweb_train_000048.npy`
  - Evaluation: `edu_fineweb_val_000000.npy`
- **File size:** ~400 MB per shard (~20 GB total)
Each file contains a flat NumPy array of token IDs that can be cast to `int32` and converted to PyTorch tensors for training.
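Loading a shard with `mmap_mode="r"` avoids pulling the full file into memory, which makes a quick inventory of the training split cheap. A small sanity-check sketch; the data directory is an assumption:

```python
import os
import numpy as np

data_dir = "."  # path to the shard directory (assumption)
shards = sorted(f for f in os.listdir(data_dir) if f.startswith("edu_fineweb_train_"))
total = sum(np.load(os.path.join(data_dir, s), mmap_mode="r").shape[0] for s in shards)
print(f"{len(shards)} training shards, {total:,} tokens")  # expect 49 shards, ~9.8B tokens
```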
## Example Usage
```python
import numpy as np
import torch

# Load a single training shard
filename = "edu_fineweb_train_000001.npy"
npt = np.load(filename)

# Cast to int32, then to the long tensor PyTorch embedding layers expect
npt = npt.astype(np.int32)
ptt = torch.tensor(npt, dtype=torch.long)

print(ptt.shape)  # torch.Size([200000000])
print(ptt[:10])   # First 10 token IDs
```
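For the distributed or multi-GPU setups mentioned above, each process can walk the shards at a rank-specific offset, memory-mapping files to avoid holding the full 20 GB in RAM. A minimal loader sketch; the class name, stride scheme, and round-robin shard order are illustrative, not a published API:

```python
import glob
import numpy as np
import torch

class ShardedLoader:
    """Cycle through .npy shards, yielding (x, y) batches for next-token prediction."""

    def __init__(self, pattern, batch_size, context_len, rank=0, world_size=1):
        self.files = sorted(glob.glob(pattern))
        self.B, self.T = batch_size, context_len
        self.rank, self.world_size = rank, world_size
        self.shard_idx = 0
        self._load_shard()

    def _load_shard(self):
        # Memory-map the shard so only the pages actually read are loaded
        self.tokens = np.load(self.files[self.shard_idx], mmap_mode="r")
        # Each rank starts at a different offset so ranks see disjoint batches
        self.pos = self.B * self.T * self.rank

    def next_batch(self):
        B, T = self.B, self.T
        span = self.tokens[self.pos : self.pos + B * T + 1].astype(np.int64)
        buf = torch.from_numpy(span)
        x = buf[:-1].view(B, T)  # inputs
        y = buf[1:].view(B, T)   # targets, shifted by one token
        # Advance past all ranks' batches; move to the next shard when exhausted
        self.pos += B * T * self.world_size
        if self.pos + B * T * self.world_size + 1 > len(self.tokens):
            self.shard_idx = (self.shard_idx + 1) % len(self.files)
            self._load_shard()
        return x, y

loader = ShardedLoader("edu_fineweb_train_*.npy", batch_size=8, context_len=1024)
x, y = loader.next_batch()
```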