What do the last 64 tokens in the PaliGemma tokenizer represent?
Hi,
I found that the PaliGemma weights include 257,216 embeddings. However, based on the official blog post (https://developers.googleblog.com/en/gemma-explained-paligemma-architecture/), this seems to be an incorrect value. The calculation shown there (256,000 base vocab + 1,024 location tokens + 128 segmentation tokens) totals 257,152, not the 257,216 shown in the blog. The correct number should likely be 257,152.
Furthermore, when I load the tokenizer from Hugging Face, it reports a vocabulary size of 257,152.
This raises the question: What do the remaining 64 embeddings (i.e., 257,216 - 257,152) represent?
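For reference, here is roughly how I compared the two numbers with `transformers` (a minimal sketch, assuming the `google/paligemma-3b-pt-224` checkpoint, which this thread doesn't confirm; other PaliGemma checkpoints should show the same mismatch):

```python
from transformers import AutoTokenizer, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-pt-224"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

vocab_size = len(tokenizer)                                 # tokenizer vocabulary
embed_rows = model.get_input_embeddings().weight.shape[0]   # rows in the embedding matrix

print(vocab_size)               # 257152
print(embed_rows)               # 257216
print(embed_rows - vocab_size)  # 64 embedding rows with no corresponding token
```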
Best,
Yicheng
There's no way Google is now simply AI-generating responses to questions 😭😭
I have manually looked into this. It looks like the tokenizer is also only capable of encoding token IDs up to 257,152. One possibility is that the embedding tensor was padded to a larger size for performance reasons, but that doesn't seem to be the case here: 257,152 is already a multiple of 128, while the resized value (257,216) is only a multiple of 64, and TPUs are known to perform best with dimensions that are multiples of 128. So the next most likely explanation is that the extra 64 tokens were reserved for something specific that was later scrapped, and the embedding layer was never resized back down.
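For what it's worth, here is a small sketch of the checks behind that reasoning (again assuming the `google/paligemma-3b-pt-224` tokenizer; the exact checkpoint shouldn't matter for the vocabulary):

```python
from transformers import AutoTokenizer

# Assumed checkpoint; any PaliGemma tokenizer should report the same vocabulary.
tokenizer = AutoTokenizer.from_pretrained("google/paligemma-3b-pt-224")

# Highest token ID the tokenizer can actually produce: 257151,
# i.e. 257,152 distinct IDs, 64 fewer than the embedding rows.
print(max(tokenizer.get_vocab().values()))

# Padding-for-performance check: 257,152 is already a multiple of 128,
# while the padded size 257,216 is only a multiple of 64.
for n in (257_152, 257_216):
    print(f"{n}: % 128 = {n % 128}, % 64 = {n % 64}")
# 257152: % 128 = 0, % 64 = 0
# 257216: % 128 = 64, % 64 = 0
```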