luisra committed · Commit e1151d5 · verified · 1 Parent(s): 9e8656e

Update README.md

Files changed (1): README.md (+801, -3)

---
license: other
license_name: modified-mit
library_name: transformers
---
<div align="center">
  <picture>
      <img src="figures/kimi-logo.png" width="30%" alt="Kimi K2: Open Agentic Intelligence">
  </picture>
</div>

<hr>

<div align="center" style="line-height:1">
<a href="https://www.kimi.com" target="_blank"><img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-Kimi%20K2-ff6b6b?color=1783ff&logoColor=white"/></a>
<a href="https://github.com/moonshotai/Kimi-K2"><img alt="github" src="https://img.shields.io/badge/🤖%20Github-Kimi%20K2-ff6b6b?color=1783ff&logoColor=white"/></a>
<a href="https://www.moonshot.ai" target="_blank"><img alt="Homepage" src="https://img.shields.io/badge/Homepage-Moonshot%20AI-white?logo=Kimi&logoColor=white"/></a>
</div>

<div align="center" style="line-height: 1;">
<a href="https://huggingface.co/moonshotai" target="_blank"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Moonshot%20AI-ffc107?color=ffc107&logoColor=white"/></a>
<a href="https://twitter.com/kimi_moonshot" target="_blank"><img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-Kimi.ai-white?logo=x&logoColor=white"/></a>
<a href="https://discord.gg/TYU2fdJykW" target="_blank"><img alt="Discord" src="https://img.shields.io/badge/Discord-Kimi.ai-white?logo=discord&logoColor=white"/></a>
</div>

<div align="center" style="line-height: 1;">
<a href="https://github.com/moonshotai/Kimi-K2/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/badge/License-Modified_MIT-f5de53?&color=f5de53"/></a>
</div>

<p align="center">
<b>📰&nbsp;&nbsp;<a href="https://moonshotai.github.io/Kimi-K2/">Tech Blog</a></b> &nbsp;&nbsp;&nbsp; | &nbsp;&nbsp;&nbsp; <b>📄&nbsp;&nbsp;Paper Link (coming soon)</b>
</p>

## 0. Changelog

### 2025.7.15
- We have updated our tokenizer implementation. Special tokens like `[EOS]` can now be encoded to their token IDs (see the quick check below).
- We fixed a bugin the chat template that was breaking multi-turn tool calls.
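
A quick way to verify the tokenizer update is to round-trip a special token (a minimal sketch, assuming the tokenizer loads through `transformers.AutoTokenizer` with `trust_remote_code=True`; the exact ID printed depends on the vocabulary):

```python
from transformers import AutoTokenizer

# Sketch only: load the released tokenizer and encode a special token.
tokenizer = AutoTokenizer.from_pretrained(
    "moonshotai/Kimi-K2-Instruct", trust_remote_code=True
)

# With the updated implementation, "[EOS]" should map to its reserved
# token ID rather than being split into ordinary text pieces.
ids = tokenizer.encode("[EOS]", add_special_tokens=False)
print(ids)  # expected: a single special-token ID
```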

## 1. Model Introduction

Kimi K2 is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Trained with the Muon optimizer, Kimi K2 achieves exceptional performance across frontier knowledge, reasoning, and coding tasks while being meticulously optimized for agentic capabilities.

### Key Features
- Large-Scale Training: Pre-trained a 1T-parameter MoE model on 15.5T tokens with zero training instability.
- MuonClip Optimizer: We apply the Muon optimizer at an unprecedented scale and develop novel optimization techniques to resolve instabilities while scaling up.
- Agentic Intelligence: Specifically designed for tool use, reasoning, and autonomous problem-solving.

### Model Variants
- **Kimi-K2-Base**: The foundation model, a strong starting point for researchers and builders who want full control for fine-tuning and custom solutions.
- **Kimi-K2-Instruct**: The post-trained model, best for drop-in, general-purpose chat and agentic experiences. It is a reflex-grade model without long thinking.

<div align="center">
  <picture>
      <img src="figures/banner.png" width="80%" alt="Evaluation Results">
  </picture>
</div>

## 2. Model Summary

<div align="center">

| | |
|:---:|:---:|
| **Architecture** | Mixture-of-Experts (MoE) |
| **Total Parameters** | 1T |
| **Activated Parameters** | 32B |
| **Number of Layers** (dense layer included) | 61 |
| **Number of Dense Layers** | 1 |
| **Attention Hidden Dimension** | 7168 |
| **MoE Hidden Dimension** (per expert) | 2048 |
| **Number of Attention Heads** | 64 |
| **Number of Experts** | 384 |
| **Selected Experts per Token** | 8 |
| **Number of Shared Experts** | 1 |
| **Vocabulary Size** | 160K |
| **Context Length** | 128K |
| **Attention Mechanism** | MLA |
| **Activation Function** | SwiGLU |

</div>
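
For intuition, the routing numbers above imply the following per-token expert sparsity (a back-of-the-envelope sketch; it counts expert slots only, while attention, embeddings, and the dense layer are always active, which is why the activated-parameter ratio of 32B / 1T is higher):

```python
# Per-token expert sparsity implied by the Model Summary table.
total_experts = 384       # routed experts per MoE layer
selected_per_token = 8    # experts chosen by the router for each token
shared_experts = 1        # always-active shared expert

active = selected_per_token + shared_experts
slots = total_experts + shared_experts
print(f"{active} of {slots} experts run per token "
      f"(~{active / slots:.1%} of expert slots)")
```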

## 3. Evaluation Results

#### Instruction model evaluation results

<div align="center">

| Benchmark | Metric | Kimi K2 Instruct | DeepSeek-V3-0324 | Qwen3-235B-A22B <sup>(non-thinking)</sup> | Claude Sonnet 4 <sup>(w/o extended thinking)</sup> | Claude Opus 4 <sup>(w/o extended thinking)</sup> | GPT-4.1 | Gemini 2.5 Flash Preview (05-20) |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **Coding Tasks** | | | | | | | | |
| LiveCodeBench v6 <sup>(Aug 24 - May 25)</sup> | Pass@1 | **53.7** | 46.9 | 37.0 | 48.5 | 47.4 | 44.7 | 44.7 |
| OJBench | Pass@1 | **27.1** | 24.0 | 11.3 | 15.3 | 19.6 | 19.5 | 19.5 |
| MultiPL-E | Pass@1 | <ins>**85.7**</ins> | 83.1 | 78.2 | 88.6 | **89.6** | 86.7 | 85.6 |
| SWE-bench Verified <sup>(Agentless Coding)</sup> | Single Patch w/o Test (Acc) | <ins>**51.8**</ins> | 36.6 | 39.4 | 50.2 | **53.0** | 40.8 | 32.6 |
| SWE-bench Verified <sup>(Agentic Coding)</sup> | Single Attempt (Acc) | <ins>**65.8**</ins> | 38.8 | 34.4 | **72.7**<sup>*</sup> | 72.5<sup>*</sup> | 54.6 | — |
| SWE-bench Verified <sup>(Agentic Coding)</sup> | Multiple Attempts (Acc) | <ins>**71.6**</ins> | — | — | **80.2** | 79.4<sup>*</sup> | — | — |
| SWE-bench Multilingual <sup>(Agentic Coding)</sup> | Single Attempt (Acc) | <ins>**47.3**</ins> | 25.8 | 20.9 | **51.0** | — | 31.5 | — |
| TerminalBench | Inhouse Framework (Acc) | <ins>**30.0**</ins> | — | — | 35.5 | **43.2** | 8.3 | — |
| TerminalBench | Terminus (Acc) | <ins>**25.0**</ins> | 16.3 | 6.6 | — | — | **30.3** | 16.8 |
| Aider-Polyglot | Acc | 60.0 | 55.1 | <ins>**61.8**</ins> | 56.4 | **70.7** | 52.4 | 44.0 |
| **Tool Use Tasks** | | | | | | | | |
| Tau2 retail | Avg@4 | <ins>**70.6**</ins> | 69.1 | 57.0 | 75.0 | **81.8** | 74.8 | 64.3 |
| Tau2 airline | Avg@4 | <ins>**56.5**</ins> | 39.0 | 26.5 | 55.5 | **60.0** | 54.5 | 42.5 |
| Tau2 telecom | Avg@4 | **65.8** | 32.5 | 22.1 | 45.2 | 57.0 | 38.6 | 16.9 |
| AceBench | Acc | <ins>**76.5**</ins> | 72.7 | 70.5 | 76.2 | 75.6 | **80.1** | 74.5 |
| **Math &amp; STEM Tasks** | | | | | | | | |
| AIME 2024 | Avg@64 | **69.6** | 59.4<sup>*</sup> | 40.1<sup>*</sup> | 43.4 | 48.2 | 46.5 | 61.3 |
| AIME 2025 | Avg@64 | **49.5** | 46.7 | 24.7<sup>*</sup> | 33.1<sup>*</sup> | 33.9<sup>*</sup> | 37.0 | 46.6 |
| MATH-500 | Acc | **97.4** | 94.0<sup>*</sup> | 91.2<sup>*</sup> | 94.0 | 94.4 | 92.4 | 95.4 |
| HMMT 2025 | Avg@32 | **38.8** | 27.5 | 11.9 | 15.9 | 15.9 | 19.4 | 34.7 |
| CNMO 2024 | Avg@16 | 74.3 | <ins>**74.7**</ins> | 48.6 | 60.4 | 57.6 | 56.6 | **75.0** |
| PolyMath-en | Avg@4 | **65.1** | 59.5 | 51.9 | 52.8 | 49.8 | 54.0 | 49.9 |
| ZebraLogic | Acc | **89.0** | 84.0 | 37.7<sup>*</sup> | 73.7 | 59.3 | 58.5 | 57.9 |
| AutoLogi | Acc | <ins>**89.5**</ins> | 88.9 | 83.3 | **89.8** | 86.1 | 88.2 | 84.1 |
| GPQA-Diamond | Avg@8 | **75.1** | 68.4<sup>*</sup> | 62.9<sup>*</sup> | 70.0<sup>*</sup> | 74.9<sup>*</sup> | 66.3 | 68.2 |
| SuperGPQA | Acc | **57.2** | 53.7 | 50.2 | 55.7 | 56.5 | 50.8 | 49.6 |
| Humanity's Last Exam <sup>(Text Only)</sup> | - | 4.7 | 5.2 | <ins>**5.7**</ins> | 5.8 | **7.1** | 3.7 | 5.6 |
| **General Tasks** | | | | | | | | |
| MMLU | EM | <ins>**89.5**</ins> | 89.4 | 87.0 | 91.5 | **92.9** | 90.4 | 90.1 |
| MMLU-Redux | EM | <ins>**92.7**</ins> | 90.5 | 89.2 | 93.6 | **94.2** | 92.4 | 90.6 |
| MMLU-Pro | EM | 81.1 | <ins>**81.2**</ins><sup>*</sup> | 77.3 | 83.7 | **86.6** | 81.8 | 79.4 |
| IFEval | Prompt Strict | **89.8** | 81.1 | 83.2<sup>*</sup> | 87.6 | 87.4 | 88.0 | 84.3 |
| Multi-Challenge | Acc | **54.1** | 31.4 | 34.0 | 46.8 | 49.0 | 36.4 | 39.5 |
| SimpleQA | Correct | <ins>**31.0**</ins> | 27.7 | 13.2 | 15.9 | 22.8 | **42.3** | 23.3 |
| Livebench | Pass@1 | **76.4** | 72.4 | 67.6 | 74.8 | 74.6 | 69.8 | 67.8 |

</div>
<sup>
• Bold denotes global SOTA, and underlined denotes open-source SOTA.
</sup><br/><sup>
• Data points marked with * are taken directly from the model's tech report or blog.
</sup><br/><sup>
• All metrics, except for SWE-bench Verified (Agentless), are evaluated with an 8k output token length. SWE-bench Verified (Agentless) is limited to a 16k output token length.
</sup><br/><sup>
• Kimi K2 achieves 65.8% pass@1 on SWE-bench Verified with bash/editor tools (single-attempt patches, no test-time compute). It also achieves 47.3% pass@1 on SWE-bench Multilingual under the same conditions. Additionally, we report results on SWE-bench Verified (71.6%) that leverage parallel test-time compute by sampling multiple sequences and selecting the single best via an internal scoring model.
</sup><br/><sup>
• To ensure evaluation stability, we employed avg@k on AIME, HMMT, CNMO, PolyMath-en, GPQA-Diamond, EvalPlus, and Tau2.
</sup><br/><sup>
• Some data points have been omitted due to prohibitively expensive evaluation costs.
</sup>
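
For reference, the avg@k aggregation used above scores k independent generations per problem and averages the per-problem accuracy (a minimal sketch under that reading; the function name is ours):

```python
def avg_at_k(correct: list[list[bool]]) -> float:
    """correct[i][j] is whether sample j of problem i is correct (k samples per problem)."""
    per_problem = [sum(samples) / len(samples) for samples in correct]
    return 100 * sum(per_problem) / len(per_problem)

# Two problems, k = 4 samples each: per-problem accuracies 0.75 and 0.25 average to 50.0.
print(avg_at_k([[True, True, False, True], [False, True, False, False]]))  # 50.0
```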

---

#### Base model evaluation results

<div align="center">

| Benchmark | Metric | Shot | Kimi K2 Base | Deepseek-V3-Base | Qwen2.5-72B | Llama 4 Maverick |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| **General Tasks** | | | | | | |
| MMLU | EM | 5-shot | **87.8** | 87.1 | 86.1 | 84.9 |
| MMLU-pro | EM | 5-shot | **69.2** | 60.6 | 62.8 | 63.5 |
| MMLU-redux-2.0 | EM | 5-shot | **90.2** | 89.5 | 87.8 | 88.2 |
| SimpleQA | Correct | 5-shot | **35.3** | 26.5 | 10.3 | 23.7 |
| TriviaQA | EM | 5-shot | **85.1** | 84.1 | 76.0 | 79.3 |
| GPQA-Diamond | Avg@8 | 5-shot | 48.1 | **50.5** | 40.8 | 49.4 |
| SuperGPQA | EM | 5-shot | **44.7** | 39.2 | 34.2 | 38.8 |
| **Coding Tasks** | | | | | | |
| LiveCodeBench v6 | Pass@1 | 1-shot | **26.3** | 22.9 | 21.1 | 25.1 |
| EvalPlus | Pass@1 | - | **80.3** | 65.6 | 66.0 | 65.5 |
| **Mathematics Tasks** | | | | | | |
| MATH | EM | 4-shot | **70.2** | 60.1 | 61.0 | 63.0 |
| GSM8k | EM | 8-shot | **92.1** | 91.7 | 90.4 | 86.3 |
| **Chinese Tasks** | | | | | | |
| C-Eval | EM | 5-shot | **92.5** | 90.0 | 90.9 | 80.9 |
| CSimpleQA | Correct | 5-shot | **77.6** | 72.1 | 50.5 | 53.5 |

</div>
<sup>
• We only evaluate open-source pretrained models in this work. We report results for Qwen2.5-72B because the base checkpoint for Qwen3-235B-A22B was not open-sourced at the time of our study.
</sup><br/><sup>
• All models are evaluated using the same evaluation protocol.
</sup>

## 4. Deployment
> [!NOTE]
> You can access Kimi K2's API at https://platform.moonshot.ai; we provide an OpenAI/Anthropic-compatible API.
>
> The Anthropic-compatible API maps temperature as `real_temperature = request_temperature * 0.6` for better compatibility with existing applications (e.g., a request sent with `temperature = 1.0` runs at an effective temperature of 0.6).

Our model checkpoints are stored in block-FP8 format; you can find them on [Hugging Face](https://huggingface.co/moonshotai/Kimi-K2-Instruct).

We currently recommend running Kimi-K2 on the following inference engines:

* vLLM
* SGLang
* KTransformers
* TensorRT-LLM

Deployment examples for vLLM and SGLang can be found in the [Model Deployment Guide](docs/deploy_guidance.md).

---

## 5. Model Usage

### Chat Completion

Once the local inference service is up, you can interact with it through the chat endpoint:

```python
from openai import OpenAI

def simple_chat(client: OpenAI, model_name: str):
    messages = [
        {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
        {"role": "user", "content": [{"type": "text", "text": "Please give a brief self-introduction."}]},
    ]
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        temperature=0.6,
        max_tokens=256,
    )
    print(response.choices[0].message.content)
```

> [!NOTE]
> The recommended temperature for Kimi-K2-Instruct is `temperature = 0.6`.
> If no special instructions are required, the system prompt above is a good default.
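
To drive `simple_chat`, point an OpenAI client at your local server (a minimal sketch; the base URL, API key, and served model name below are placeholders that depend on how you launched the inference engine):

```python
from openai import OpenAI

# Placeholder endpoint: substitute the host/port your vLLM or SGLang server exposes.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
simple_chat(client, "moonshotai/Kimi-K2-Instruct")
```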
708
+ ---
709
+
710
+ ### Tool Calling
711
+
712
+ Kimi-K2-Instruct has strong tool-calling capabilities.
713
+ To enable them, you need to pass the list of available tools in each request, then the model will autonomously decide when and how to invoke them.
714
+
715
+ The following example demonstrates calling a weather tool end-to-end:
716
+
717
+ ```python
718
+ # Your tool implementation
719
+ def get_weather(city: str) -> dict:
720
+ return {"weather": "Sunny"}
721
+
722
+ # Tool schema definition
723
+ tools = [{
724
+ "type": "function",
725
+ "function": {
726
+ "name": "get_weather",
727
+ "description": "Retrieve current weather information. Call this when the user asks about the weather.",
728
+ "parameters": {
729
+ "type": "object",
730
+ "required": ["city"],
731
+ "properties": {
732
+ "city": {
733
+ "type": "string",
734
+ "description": "Name of the city"
735
+ }
736
+ }
737
+ }
738
+ }
739
+ }]
740
+
741
+ # Map tool names to their implementations
742
+ tool_map = {
743
+ "get_weather": get_weather
744
+ }
745
+
746
+ def tool_call_with_client(client: OpenAI, model_name: str):
747
+ messages = [
748
+ {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
749
+ {"role": "user", "content": "What's the weather like in Beijing today? Use the tool to check."}
750
+ ]
751
+ finish_reason = None
752
+ while finish_reason is None or finish_reason == "tool_calls":
753
+ completion = client.chat.completions.create(
754
+ model=model_name,
755
+ messages=messages,
756
+ temperature=0.6,
757
+ tools=tools, # tool list defined above
758
+ tool_choice="auto"
759
+ )
760
+ choice = completion.choices[0]
761
+ finish_reason = choice.finish_reason
762
+ if finish_reason == "tool_calls":
763
+ messages.append(choice.message)
764
+ for tool_call in choice.message.tool_calls:
765
+ tool_call_name = tool_call.function.name
766
+ tool_call_arguments = json.loads(tool_call.function.arguments)
767
+ tool_function = tool_map[tool_call_name]
768
+ tool_result = tool_function(**tool_call_arguments)
769
+ print("tool_result:", tool_result)
770
+
771
+ messages.append({
772
+ "role": "tool",
773
+ "tool_call_id": tool_call.id,
774
+ "name": tool_call_name,
775
+ "content": json.dumps(tool_result)
776
+ })
777
+ print("-" * 100)
778
+ print(choice.message.content)
779
+ ```
780
+
781
+ The `tool_call_with_client` function implements the pipeline from user query to tool execution.
782
+ This pipeline requires the inference engine to support Kimi-K2’s native tool-parsing logic.
783
+ For streaming output and manual tool-parsing, see the [Tool Calling Guide](docs/tool_call_guidance.md).
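
Invocation mirrors the chat example, with the same placeholder caveats (endpoint and model name depend on your deployment); the loop re-queries the model until it stops requesting tools, so multi-step tool chains follow the same code path:

```python
from openai import OpenAI

# Placeholder endpoint and model name, as in the Chat Completion sketch above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
tool_call_with_client(client, "moonshotai/Kimi-K2-Instruct")
```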
784
+
785
+ ---
786
+
787
+ ## 6. License
788
+
789
+ Both the code repository and the model weights are released under the [Modified MIT License](LICENSE).
790
+
791
+ ---
792
+
793
+ ## 7. Third Party Notices
794
+
795
+ See [THIRD PARTY NOTICES](THIRD_PARTY_NOTICES.md)
796
+
797
+ ---
798
+
799
+ ## 7. Contact Us
800
+
801
+ If you have any questions, please reach out at [[email protected]](mailto:[email protected]).