shunxing1234 committed fa664c9 (verified; parent: fb7b62e)

Update README.md

Files changed (1): README.md (+81 −3)

README.md CHANGED
---
license: apache-2.0
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/61ee40a269351366e29972ad/KIYEa1c_WJEWPpeS0L_k1.png" width="100%" alt="Kwaipilot" />
</div>

<hr>

# News

🔥 We’re thrilled to announce the release of **KAT-Dev-72B-Exp**, our latest and most powerful model yet!

🔥 You can now try our **strongest** proprietary coder model **KAT-Coder** directly on the [**StreamLake**](https://www.streamlake.ai/product/kat-coder) platform **for free**.
# Highlights
**KAT-Dev-72B-Exp** is an open-source 72B-parameter model for software engineering tasks.

On SWE-Bench Verified, **KAT-Dev-72B-Exp** achieves **74.6%** accuracy ⚡ when evaluated strictly with the SWE-agent scaffold.

**KAT-Dev-72B-Exp** is the experimental reinforcement-learning version of the KAT-Coder model. With this open-source release, we aim to share the technical innovations behind KAT-Coder’s large-scale RL with developers and researchers.

![Kim 2025-10-10 165138](https://cdn-uploads.huggingface.co/production/uploads/61ee40a269351366e29972ad/-1nx5HYc-wTjUFNbf-GfO.png)
# Introduction

We rewrote the attention kernel and redesigned the training engine around shared-prefix trajectories to achieve highly efficient RL training, especially for scaffolds that leverage context management.
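Scaffolds that manage context produce many rollout trajectories that share long token prefixes, so a training engine can process each shared prefix once rather than once per trajectory. As a rough illustration of the saving only (a counting sketch, not the actual kernel or engine rewrite; `dedup_prefix_tokens` is a hypothetical helper):

```python
def dedup_prefix_tokens(trajectories):
    """Count how many token positions must be processed when trajectories
    sharing a common prefix reuse that prefix's computation, versus
    processing every trajectory independently.

    Illustrative sketch of why shared-prefix batching pays off; not the
    actual training-engine implementation.
    """
    trie = {}
    unique = 0  # positions processed once under prefix sharing
    naive = 0   # positions processed without any sharing
    for traj in trajectories:
        node = trie
        for tok in traj:
            naive += 1
            if tok not in node:
                node[tok] = {}
                unique += 1  # first time this position/path is seen
            node = node[tok]
    return unique, naive
```

Two trajectories of length 4 that diverge only at the last token would cost 8 positions naively but only 5 with prefix sharing.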
Furthermore, to prevent the exploration collapse observed during RL training, we reshape the advantage distribution based on pass rates: amplifying the advantage scale of highly exploratory groups while reducing that of low-exploration ones.
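As one concrete (and simplified) reading of that idea, the sketch below computes GRPO-style group-relative advantages and rescales each group by a weight derived from its pass rate; the entropy-based weight and the `floor` parameter are illustrative assumptions, not the released training recipe:

```python
import math

def reshape_advantages(rewards, floor=0.1):
    """Group-relative advantages, rescaled by a pass-rate-based
    exploration weight.

    rewards: pass/fail rewards (1.0 = pass, 0.0 = fail) for one prompt's
    group of rollouts. The weighting function is an illustrative
    assumption, not the released training configuration.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / n)
    if std == 0.0:
        # All rollouts agree: no learning signal from this group.
        return [0.0] * n
    base = [(r - mean) / std for r in rewards]

    # Pass rate p: groups near p = 0.5 have the highest outcome entropy
    # (most exploratory) and keep full scale; near-saturated groups
    # (p close to 0 or 1) are damped toward `floor`.
    p = sum(rewards) / n
    entropy = -(p * math.log(p) + (1 - p) * math.log(1 - p))
    scale = floor + (1.0 - floor) * entropy / math.log(2)
    return [a * scale for a in base]
```

A balanced group (pass rate 0.5) keeps its full advantage scale, while a group that passes 3 of 4 rollouts has its advantages shrunk, discouraging the policy from concentrating updates on low-exploration groups.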
# Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "KAT-Dev-72B-Exp"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=65536
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```
# SWE-agent Evaluation Parameters

```yaml
temperature: 0.6
max_turns: 150
history_processors.n: 100
```

For the full settings, please refer to `inference.yaml`.
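The `history_processors.n: 100` setting bounds how many tool observations stay in context. A minimal sketch of that kind of context management (illustrative only; `trim_observations` is a hypothetical helper, and SWE-agent's actual processor may differ in detail):

```python
def trim_observations(history, n, placeholder="[... observation elided ...]"):
    """Keep the content of only the last `n` tool observations; older
    observation messages are replaced with a short placeholder so the
    agent's context stays bounded across long episodes.

    Hypothetical helper mirroring the intent of `history_processors.n`;
    not SWE-agent's exact implementation.
    """
    obs_indices = [i for i, m in enumerate(history) if m["role"] == "tool"]
    keep = set(obs_indices[-n:])
    trimmed = []
    for i, m in enumerate(history):
        if m["role"] == "tool" and i not in keep:
            # Copy the message so the original history is not mutated.
            trimmed.append({**m, "content": placeholder})
        else:
            trimmed.append(m)
    return trimmed
```

With `n=100`, a 150-turn episode would keep the most recent 100 observations verbatim and elide the rest, while user and assistant messages are left untouched.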