SuperBPE
Collection: SuperBPE tokenizers and models trained with them
This subset contains documents sampled uniformly at random from allenai/olmo-mix-1124. Because of some extreme outliers in document length, the longest 1% of documents in the subset are truncated to the 99th-percentile length. The `train` split is used for tokenizer training, and the `eval` split is used for evaluating tokenizer encoding efficiency.
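The outlier truncation described above can be sketched as follows. This is an illustrative reimplementation, not the dataset's actual preprocessing code; the function name and the nearest-rank percentile method are assumptions.

```python
def truncate_outliers(docs):
    """Cap every document at the 99th-percentile document length.

    A sketch of the procedure described above: compute the
    99th-percentile length over the corpus (nearest-rank method,
    an assumption here), then truncate any longer document to it.
    """
    lengths = sorted(len(d) for d in docs)
    # Index of the 99th-percentile length in the sorted list.
    cap = lengths[int(0.99 * (len(lengths) - 1))]
    return [d[:cap] for d in docs]

# Example: 99 short documents and one extreme outlier.
docs = ["a" * 10] * 99 + ["b" * 1000]
out = truncate_outliers(docs)
```

With this corpus, the 99th-percentile length is 10 characters, so the single 1000-character outlier is cut down to 10 while the other documents are unchanged.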