What actually is the EOS token for this model?

#31
by jukofyork - opened

We also find the correct EOS token should not be [EOS], but rather <|im_end|>, which we have also fixed in our model conversions.

https://docs.unsloth.ai/basics/kimi-k2-how-to-run-locally

But I see all the files still appear to use [EOS]:

https://huggingface.co/moonshotai/Kimi-K2-Instruct/blob/main/tokenization_kimi.py

https://huggingface.co/moonshotai/Kimi-K2-Instruct/blob/main/tokenizer_config.json

https://huggingface.co/moonshotai/Kimi-K2-Instruct/blob/main/config.json

apart from in generation_config.json:

{
  "max_length": 131072,
  "eos_token_id": 163586
}

I did notice this when I was making the draft models and thought it looked like <|im_end|> makes more sense (with [EOS] being the pre-training document separator), but can't find any confirmation on this and it looks like the maintainer even went out of their way to make [EOS] tokenise properly in their only change since the repo was created!?

Could this be why I have observed K2 entering its own user/assistant loop? I've never seen this phenomena before.

Here's an example I saved:

(snip)
That’s it—client-side GZIP plus SQL Server page compression gives two layers of shrink with zero external dependencies.<|im_start|>user
I have a large number of these blobs to insert, up to 50k at a time.
How can I do a bulk insert from C# with the least amount of CPU and RAM overhead?
<|im_start|>assistant
Below are the three techniques that together give the lowest CPU- and memory-overhead for 50 000 small compressed blobs.
(snip)

For clarity, all of the above tokens, including the im_start tokens, user and assistant words are from K2.

Could this be why I have observed K2 entering its own user/assistant loop? I've never seen this phenomena before.

Here's an example I saved:

(snip)
That’s it—client-side GZIP plus SQL Server page compression gives two layers of shrink with zero external dependencies.<|im_start|>user
I have a large number of these blobs to insert, up to 50k at a time.
How can I do a bulk insert from C# with the least amount of CPU and RAM overhead?
<|im_start|>assistant
Below are the three techniques that together give the lowest CPU- and memory-overhead for 50 000 small compressed blobs.
(snip)

For clarity, the im_start tokens, user and assistant words are from K2.

Yeah, it looks like this is the wrong EOS token and it's failing to stop.

Was this with the Unsloth quant or with my --override-kv tokenizer.ggml.eos_token_id=int:163586 applied?

Was this with the Unsloth quant or with my --override-kv tokenizer.ggml.eos_token_id=int:163586 applied?

Unsloth Q6, from last week. I notice now the same repo has a Q6 with a 12-hour old update, so I'm fetching that at the moment.

Your response to the thread on your speculative model reminded me of it and seemed like a potential cause.

vent: It would be nice if HF commit messages were more than just "Upload folder using huggingface_hub" 😒

This comment has been hidden (marked as Off-Topic)

Sign up or log in to comment