---
license: apache-2.0
datasets:
- NousResearch/Hermes-3-Dataset
- HuggingFaceTB/everyday-conversations-llama3.1-2k
base_model:
- Qwen/Qwen3-4B
---

This Qwen3-4B model was fine-tuned on the Hermes 3 dataset to enhance its general chat capabilities while retaining Qwen3's reasoning capabilities.

## transformers
As the Qwen team suggests, you can run the model with `transformers` as follows:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ertghiu256/Qwen3-Hermes-4b"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() 

# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```
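If you want the model to answer directly without emitting a reasoning trace, the Qwen3 chat template also accepts `enable_thinking=False`:
```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # disables the <think> ... </think> reasoning block
)
```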

## vllm
Run this command:
```bash
vllm serve ertghiu256/Qwen3-Hermes-4b --enable-reasoning --reasoning-parser deepseek_r1
```
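Once the server is up, you can query it through vLLM's OpenAI-compatible API (served on port 8000 by default); a minimal sketch using the `openai` client:
```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint; the api_key value is unused locally
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="ertghiu256/Qwen3-Hermes-4b",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
)
print(response.choices[0].message.content)
```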

## SGLang
Run this command:
```bash
python -m sglang.launch_server --model-path ertghiu256/Qwen3-Hermes-4b --reasoning-parser deepseek-r1
```
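SGLang likewise exposes an OpenAI-compatible API (on port 30000 by default), so the client sketch shown for vLLM above should work here too once `base_url` points at `http://localhost:30000/v1`.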

## llama.cpp
Run this command:
```bash
llama-server --hf-repo ertghiu256/Qwen3-Hermes-4b
```
or
```bash
llama-cli -hf ertghiu256/Qwen3-Hermes-4b
```
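`llama-server` also provides an OpenAI-compatible `/v1/chat/completions` endpoint (listening on port 8080 by default), so the client sketch from the vLLM section should work here as well with `base_url="http://localhost:8080/v1"`.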

## ollama
Run this command:
```bash
ollama run hf.co/ertghiu256/Qwen3-Hermes-4b:Q4_K_M 
```
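Ollama serves a local REST API on port 11434; a minimal sketch of a chat request against the model tag above:
```python
import requests

# Ollama's local REST API listens on port 11434 by default
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/ertghiu256/Qwen3-Hermes-4b:Q4_K_M",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,  # return the full reply in a single response
    },
)
print(response.json()["message"]["content"])
```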

## lm studio
Search for
```
ertghiu256/Qwen3-Hermes-4b
```
in the LM Studio model search, then download it.