Update README.md
README.md CHANGED

````diff
@@ -109,7 +109,7 @@ print (f"user prompt: {prompt}")
 print (f"model thinking: {think_content}")
 print (f"model answer: {answer_content}")
 ```
-
+> Note: We have included the system prompt in the tokenizer configuration, as it was used during both the SFT and RL stages. To ensure consistent output quality, we recommend including the same system prompt during actual usage; otherwise, the model's responses may be significantly affected.
 
 ## 🔧 Post-training pipeline
 
````
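The added note implies that the released tokenizer configuration (presumably its chat template) injects the recommended system prompt automatically when no system message is supplied. Below is a minimal sketch of how to check this with the Hugging Face `transformers` API; the model identifier is a placeholder, and the exact rendered text depends on the chat template actually shipped with the model.

```python
from transformers import AutoTokenizer

# Placeholder model id -- replace with the actual model repository name.
MODEL_ID = "your-org/your-model"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# No explicit system message here: if the chat template bundled in the
# tokenizer configuration carries the default system prompt (as the note
# suggests), it should show up in the rendered prompt below.
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
rendered = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(rendered)  # inspect the output to confirm the system prompt is present
```

If the rendered prompt does not contain the expected system prompt, prepend it explicitly as a `{"role": "system", ...}` message so that inference matches the SFT/RL setup, as the note recommends.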