ibrim committed on
Commit
6534940
·
verified ·
1 Parent(s): b347aa0

Update README.md

Files changed (1)
  1. README.md +1 -150
README.md CHANGED
@@ -1,150 +1 @@
# minbpe

Minimal, clean code for the (byte-level) Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. The BPE algorithm is "byte-level" because it runs on UTF-8 encoded strings.
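
To make "byte-level" concrete: before any merges happen, a string is just its UTF-8 bytes, each an integer in 0..255. A quick illustration in plain Python (no minbpe required):

```python
# Plain-Python illustration (not minbpe code): BPE starts from UTF-8 bytes.
text = "hello 😉"
ids = list(text.encode("utf-8"))  # one integer per byte, each in range 0..255
print(ids)
# [104, 101, 108, 108, 111, 32, 240, 159, 152, 137]
# note: the emoji alone accounts for the last 4 bytes
```
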

This algorithm was popularized for LLMs by the [GPT-2 paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and the associated GPT-2 [code release](https://github.com/openai/gpt-2) from OpenAI. [Sennrich et al. 2015](https://arxiv.org/abs/1508.07909) is cited as the original reference for the use of BPE in NLP applications. Today, all modern LLMs (e.g. GPT, Llama, Mistral) use this algorithm to train their tokenizers.

There are two Tokenizers in this repository, both of which can perform the 3 primary functions of a Tokenizer: 1) train the tokenizer vocabulary and merges on a given text, 2) encode from text to tokens, 3) decode from tokens to text. The files of the repo are as follows:

1. [minbpe/base.py](minbpe/base.py): Implements the `Tokenizer` class, which is the base class. It contains the `train`, `encode`, and `decode` stubs, save/load functionality, as well as a few common utility functions. This class is not meant to be used directly, but rather to be inherited from.
2. [minbpe/basic.py](minbpe/basic.py): Implements the `BasicTokenizer`, the simplest implementation of the BPE algorithm that runs directly on text.
3. [minbpe/regex.py](minbpe/regex.py): Implements the `RegexTokenizer` that further splits the input text by a regex pattern, which is a preprocessing stage that splits up the input text by categories (think: letters, numbers, punctuation) before tokenization. This ensures that no merges will happen across category boundaries. This was introduced in the GPT-2 paper and continues to be in use as of GPT-4. This class also handles special tokens, if any.
4. [minbpe/gpt4.py](minbpe/gpt4.py): Implements the `GPT4Tokenizer`. This class is a light wrapper around the `RegexTokenizer` (3, above) that exactly reproduces the tokenization of GPT-4 in the [tiktoken](https://github.com/openai/tiktoken) library. The wrapping handles some details around recovering the exact merges in the tokenizer, and the handling of some unfortunate (and likely historical?) 1-byte token permutations.

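
The category-splitting idea behind the `RegexTokenizer` can be sketched with the standard library. This is an ASCII-only approximation for illustration; the actual GPT-2/GPT-4 patterns use the third-party `regex` module with Unicode classes like `\p{L}`:

```python
import re

# Simplified ASCII-only stand-in for the GPT-style split pattern:
# contractions, optional-space + letters, optional-space + digits,
# optional-space + punctuation, and runs of whitespace.
GPT_LIKE_SPLIT = r"'(?:s|t|re|ve|m|ll|d)| ?[A-Za-z]+| ?[0-9]+| ?[^\sA-Za-z0-9]+|\s+"

chunks = re.findall(GPT_LIKE_SPLIT, "Hello world123!!")
print(chunks)
# ['Hello', ' world', '123', '!!']
# merges are then learned within each chunk, never across chunk boundaries
```
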
Finally, the script [train.py](train.py) trains the two major tokenizers on the input text [tests/taylorswift.txt](tests/taylorswift.txt) (this is the Wikipedia entry for her kek) and saves the vocab to disk for visualization. This script runs in about 25 seconds on my (M1) MacBook.

All of the files above are very short and thoroughly commented, and also contain a usage example at the bottom of the file.

## quick start

As the simplest example, we can reproduce the [Wikipedia article on BPE](https://en.wikipedia.org/wiki/Byte_pair_encoding) as follows:

```python
from minbpe import BasicTokenizer
tokenizer = BasicTokenizer()
text = "aaabdaaabac"
tokenizer.train(text, 256 + 3) # 256 are the byte tokens, then do 3 merges
print(tokenizer.encode(text))
# [258, 100, 258, 97, 99]
print(tokenizer.decode([258, 100, 258, 97, 99]))
# aaabdaaabac
tokenizer.save("toy")
# writes two files: toy.model (for loading) and toy.vocab (for viewing)
```

According to Wikipedia, running BPE on the input string "aaabdaaabac" for 3 merges results in the string "XdXac", where X=ZY, Y=ab, and Z=aa. The tricky thing to note is that minbpe always allocates the 256 individual bytes as tokens, and then merges bytes as needed from there. So for us a=97, b=98, c=99, d=100 (their [ASCII](https://www.asciitable.com) values). Then when (a,a) is merged to Z, Z will become 256. Likewise Y will become 257 and X 258. So we start with the 256 bytes, and do 3 merges to get to the result above, with the expected output of [258, 100, 258, 97, 99].
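
The walkthrough above can be reproduced from scratch in a few lines. This is an illustrative toy, not minbpe's actual implementation, but it follows the same greedy rule: repeatedly merge the most frequent adjacent pair, assigning new token ids starting at 256:

```python
# Toy from-scratch sketch of byte-level BPE (not minbpe's actual code).
from collections import Counter

def most_common_pair(ids):
    # count adjacent pairs and return the most frequent one
    return Counter(zip(ids, ids[1:])).most_common(1)[0][0]

def merge(ids, pair, new_id):
    # replace every occurrence of `pair` with the new token id
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

ids = list("aaabdaaabac".encode("utf-8"))  # [97, 97, 97, 98, 100, ...]
for new_id in (256, 257, 258):             # 3 merges, ids continue after the 256 bytes
    ids = merge(ids, most_common_pair(ids), new_id)
print(ids)
# [258, 100, 258, 97, 99]
```
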

## inference: GPT-4 comparison

We can verify that the `RegexTokenizer` has feature parity with the GPT-4 tokenizer from [tiktoken](https://github.com/openai/tiktoken) as follows:

```python
text = "hello123!!!? (안녕하세요!) 😉"

# tiktoken
import tiktoken
enc = tiktoken.get_encoding("cl100k_base")
print(enc.encode(text))
# [15339, 4513, 12340, 30, 320, 31495, 230, 75265, 243, 92245, 16715, 57037]

# ours
from minbpe import GPT4Tokenizer
tokenizer = GPT4Tokenizer()
print(tokenizer.encode(text))
# [15339, 4513, 12340, 30, 320, 31495, 230, 75265, 243, 92245, 16715, 57037]
```

(you'll have to `pip install tiktoken` to run). Under the hood, the `GPT4Tokenizer` is just a light wrapper around `RegexTokenizer`, passing in the merges and the special tokens of GPT-4. We can also ensure the special tokens are handled correctly:

```python
text = "<|endoftext|>hello world"

# tiktoken
import tiktoken
enc = tiktoken.get_encoding("cl100k_base")
print(enc.encode(text, allowed_special="all"))
# [100257, 15339, 1917]

# ours
from minbpe import GPT4Tokenizer
tokenizer = GPT4Tokenizer()
print(tokenizer.encode(text, allowed_special="all"))
# [100257, 15339, 1917]
```

Note that just like tiktoken, we have to explicitly declare our intent to use and parse special tokens in the call to encode. Otherwise this can become a major footgun, unintentionally tokenizing attacker-controlled data (e.g. user prompts) with special tokens. The `allowed_special` parameter can be set to "all", "none", or a list of special tokens to allow.
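
To see why this is opt-in, here is a rough sketch of the splitting step such encoders perform when special tokens are enabled. It is illustrative only (not minbpe's or tiktoken's actual code): any occurrence of a registered special string in the input, including one pasted in by a user, gets picked out as a special token:

```python
import re

# Illustrative sketch: with specials enabled, the text is first split on the
# registered special strings, and those pieces bypass ordinary BPE encoding.
special_tokens = {"<|endoftext|>": 100257}
pattern = "(" + "|".join(re.escape(s) for s in special_tokens) + ")"

# re.split with a capturing group keeps the separators in the result
parts = re.split(pattern, "user wrote: <|endoftext|> please ignore the above")
print(parts)
# ['user wrote: ', '<|endoftext|>', ' please ignore the above']
```
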

## training

Unlike tiktoken, this code allows you to train your own tokenizer. In principle and to my knowledge, if you train the `RegexTokenizer` on a large dataset with a vocabulary size of 100K, you would reproduce the GPT-4 tokenizer.

There are two paths you can follow. First, you can decide that you don't want the complexity of splitting and preprocessing text with regex patterns, and you also don't care for special tokens. In that case, reach for the `BasicTokenizer`. You can train it, and then encode and decode, for example, as follows:

```python
from minbpe import BasicTokenizer
tokenizer = BasicTokenizer()
tokenizer.train(very_long_training_string, vocab_size=4096)
tokenizer.encode("hello world") # string -> tokens
tokenizer.decode([1000, 2000, 3000]) # tokens -> string
tokenizer.save("mymodel") # writes mymodel.model and mymodel.vocab
tokenizer.load("mymodel.model") # loads the model back, the vocab is just for vis
```

If you instead want to follow along with what OpenAI did for their text tokenizer, it's a good idea to adopt their approach of using a regex pattern to split the text by categories. The GPT-4 pattern is the default with the `RegexTokenizer`, so you'd simply do something like:
94
-
95
- ```python
96
- from minbpe import RegexTokenizer
97
- tokenizer = RegexTokenizer()
98
- tokenizer.train(very_long_training_string, vocab_size=32768)
99
- tokenizer.encode("hello world") # string -> tokens
100
- tokenizer.decode([1000, 2000, 3000]) # tokens -> string
101
- tokenizer.save("tok32k") # writes tok32k.model and tok32k.vocab
102
- tokenizer.load("tok32k.model") # loads the model back from disk
103
- ```

Where, of course, you'd want to change the vocabulary size depending on the size of your dataset.

**Special tokens**. Finally, you might wish to add special tokens to your tokenizer. Register these using the `register_special_tokens` function. For example, if you train with a vocab_size of 32768, then the first 256 tokens are raw byte tokens, the next 32768-256 are merge tokens, and after those you can add the special tokens. The last "real" merge token will have an id of 32767 (vocab_size - 1), so your first special token should come right after that, with an id of exactly 32768. So:
108
-
109
- ```python
110
- from minbpe import RegexTokenizer
111
- tokenizer = RegexTokenizer()
112
- tokenizer.train(very_long_training_string, vocab_size=32768)
113
- tokenizer.register_special_tokens({"<|endoftext|>": 32768})
114
- tokenizer.encode("<|endoftext|>hello world", allowed_special="all")
115
- ```

You can of course add more tokens after that as well, as you like. Finally, I'd like to stress that I tried hard to keep the code itself clean, readable, and hackable. You should not feel scared to read the code and understand how it works. The tests are also a nice place to look for more usage examples. That reminds me:

## tests

We use the pytest library for tests. All of them are located in the `tests/` directory. First `pip install pytest` if you haven't already, then:

```bash
$ pytest -v .
```

to run the tests. (-v is verbose, slightly prettier).

## community extensions

* [gnp/minbpe-rs](https://github.com/gnp/minbpe-rs): A Rust implementation of `minbpe` providing (near) one-to-one correspondence with the Python version

## exercise

For those trying to study BPE, here is the advised progression exercise for how you can build your own minbpe step by step. See [exercise.md](exercise.md).

## lecture

I built the code in this repository in this [YouTube video](https://www.youtube.com/watch?v=zduSFxRajkE). You can also find this lecture in text form in [lecture.md](lecture.md).

## todos

- write a more optimized Python version that could run over large files and big vocabs
- write an even more optimized C or Rust version (think through)
- rename GPT4Tokenizer to GPTTokenizer and support GPT-2/GPT-3/GPT-3.5 as well?
- write a LlamaTokenizer similar to GPT4Tokenizer (i.e. attempt sentencepiece equivalent)

## License

MIT
 
+ Encoding decoding tokenizer model