bigmoyan committed
Commit d1e2b19 · verified · 1 Parent(s): 37a95d8

Update README.md

Files changed (1)
  1. README.md +6 -0
README.md CHANGED
@@ -31,6 +31,12 @@ library_name: transformers
  <b>📰&nbsp;&nbsp;<a href="https://moonshotai.github.io/Kimi-K2/">Tech Blog</a></b> &nbsp;&nbsp;&nbsp; | &nbsp;&nbsp;&nbsp; <b>📄&nbsp;&nbsp;Paper Link (coming soon)</b>
  </p>

+ ## 0. Changelog
+
+ ### 2025.7.15
+ - We have updated our tokenizer implementation. Now special tokens like `[EOS]` can be encoded to their token ids.
+ - We fixed a bug in the chat template that was breaking multi-turn tool calls.
+
  ## 1. Model Introduction

  Kimi K2 is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Trained with the Muon optimizer, Kimi K2 achieves exceptional performance across frontier knowledge, reasoning, and coding tasks while being meticulously optimized for agentic capabilities.
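A minimal sketch of the two changelog items above, not part of the commit itself. It assumes the tokenizer is loaded from the Hub via `transformers`' `AutoTokenizer` with `trust_remote_code=True`; the repo id `moonshotai/Kimi-K2-Instruct`, the `get_weather` tool, and the message contents are illustrative assumptions. The first part checks that a special-token string such as `[EOS]` now encodes to its reserved id; the second shows the kind of multi-turn tool-call conversation the chat-template fix targets, assuming the template follows the standard `transformers` tool-calling message convention.

```python
from transformers import AutoTokenizer

# Hypothetical repo id; substitute the checkpoint you actually use.
tokenizer = AutoTokenizer.from_pretrained(
    "moonshotai/Kimi-K2-Instruct", trust_remote_code=True
)

# Changelog item 1: with the updated tokenizer, the special-token string maps
# to its reserved id instead of being split into ordinary sub-word pieces.
ids = tokenizer.encode("[EOS]", add_special_tokens=False)
print(ids)                                   # expected: a single token id
print(tokenizer.convert_ids_to_tokens(ids))  # expected: ['[EOS]']


# Changelog item 2: a multi-turn conversation that includes a tool call,
# the case the fixed chat template has to render correctly.
def get_weather(location: str) -> str:
    """Return the current temperature for a location."""  # hypothetical tool
    return "22°C"

messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    # Tool-call turn emitted by the model on the previous step.
    {"role": "assistant", "tool_calls": [
        {"type": "function",
         "function": {"name": "get_weather", "arguments": {"location": "Paris"}}}
    ]},
    # Tool result fed back to the model.
    {"role": "tool", "name": "get_weather", "content": "22°C"},
    # Follow-up user turn, so the template must handle tool calls mid-history.
    {"role": "user", "content": "And in Beijing?"},
]

prompt = tokenizer.apply_chat_template(
    messages, tools=[get_weather], add_generation_prompt=True, tokenize=False
)
print(prompt)
```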