BPE Tokenizer for TinyStoriesV2

Based on the GPT-Neo BPE tokenizer, but with a smaller vocabulary. Trained on the TinyStoriesV2 dataset.

  • Vocab size: 2048
  • 256 base (byte-level) characters
  • 1 special token: <|endoftext|>
  • 1791 merges

The vocabulary adds up exactly: 256 base characters + 1 special token + 1791 merges = 2048 tokens.
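
A minimal usage sketch, assuming the tokenizer files on the Hub can be loaded with the Hugging Face `transformers` `AutoTokenizer` (the example sentence is illustrative and not part of this card):

```python
from transformers import AutoTokenizer

# Assumption: the repo ships a transformers-compatible tokenizer config.
tokenizer = AutoTokenizer.from_pretrained("fhswf/BPE_GPT2_TinyStoriesV2_cleaned_2048")

text = "Once upon a time, there was a little robot.<|endoftext|>"
ids = tokenizer.encode(text)

print(ids)                    # token ids drawn from the 2048-entry vocabulary
print(tokenizer.decode(ids))  # decodes back to the original text
print(tokenizer.vocab_size)   # expected: 2048
```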

Dataset used to train fhswf/BPE_GPT2_TinyStoriesV2_cleaned_2048: TinyStoriesV2