- **License:** apache-2.0
- **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit

This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
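
This card does not include the training script itself, but a fine-tune like this is typically produced by wrapping Unsloth's `FastLanguageModel` in TRL's `SFTTrainer`. The sketch below is illustrative only: the dataset name, LoRA rank, and training arguments are placeholders and not the settings actually used for this model.

```python
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load the 4-bit base model listed above
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gpt-oss-20b-unsloth-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset of chat-formatted examples (a "messages" column)
dataset = load_dataset("HuggingFaceH4/Multilingual-Thinking", split = "train")

trainer = SFTTrainer(
    model = model,
    train_dataset = dataset,
    processing_class = tokenizer,
    args = SFTConfig(
        per_device_train_batch_size = 1,
        gradient_accumulation_steps = 4,
        max_steps = 30,
        learning_rate = 2e-4,
        output_dir = "outputs",
    ),
)
trainer.train()
```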
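
The inference example below assumes `model` and `tokenizer` are already loaded. One way to do that with Transformers is sketched here; `model_id` is a placeholder for this repository's Hub id, and depending on your setup you may prefer Unsloth's `FastLanguageModel.from_pretrained` instead.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-gpt-oss-finetune"  # placeholder: replace with this repo's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map = "auto")
```

The snippet below then builds a chat that asks for French reasoning at low reasoning effort and streams the model's answer:
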
```python
from transformers import TextStreamer

# Chat with a French-reasoning system prompt and a math question
messages = [
    {"role": "system", "content": "reasoning language: French\n\nYou are a helpful assistant that can solve mathematical problems."},
    {"role": "user", "content": "Résous cette équation pour un élève en classe de seconde : x^4 + 2 = 0."},
]

# Apply the gpt-oss chat template and move the tensors onto the model's device
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,
    return_tensors = "pt",
    return_dict = True,
    reasoning_effort = "low",
).to(model.device)

# Generate up to 128 new tokens, streaming them to stdout as they are produced
_ = model.generate(**inputs, max_new_tokens = 128, streamer = TextStreamer(tokenizer))
```
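
In this example the system prompt's `reasoning language: French` line asks the model to write its reasoning in French, and `reasoning_effort` is a gpt-oss chat-template option (`"low"`, `"medium"`, or `"high"`) that trades response speed for longer reasoning before the final answer.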