hzhwcmhf committed · verified
Commit bb90a70 · 1 Parent(s): 2a8783f

Update README.md
Files changed (1)
  1. README.md +9 -7
README.md CHANGED
@@ -85,21 +85,23 @@ print("thinking content:", thinking_content)
  print("content:", content)
  ```
 
- For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint:
- - vLLM:
+ For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create an OpenAI-compatible API endpoint:
+ - SGLang:
  ```shell
- vllm serve Qwen/Qwen3-30B-A3B --enable-reasoning --reasoning-parser deepseek_r1
+ python -m sglang.launch_server --model-path Qwen/Qwen3-30B-A3B --reasoning-parser qwen3
  ```
- - SGLang:
+ - vLLM:
  ```shell
- python -m sglang.launch_server --model-path Qwen/Qwen3-30B-A3B --reasoning-parser deepseek-r1
+ vllm serve Qwen/Qwen3-30B-A3B --enable-reasoning --reasoning-parser deepseek_r1
  ```
 
+ For local use, applications such as llama.cpp, Ollama, LMStudio, and MLX-LM also support Qwen3.
+
  ## Switching Between Thinking and Non-Thinking Mode
 
  > [!TIP]
- > The `enable_thinking` switch is also available in APIs created by vLLM and SGLang.
- > Please refer to our documentation for [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) and [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) users.
+ > The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
+ > Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
 
  ### `enable_thinking=True`
 
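Either serve command in the updated instructions above exposes an OpenAI-compatible endpoint, so a standard `openai`-client request is enough to exercise it. A minimal sketch, assuming the vLLM default of `http://localhost:8000/v1` (SGLang defaults to port 30000) and a placeholder API key:

```python
from openai import OpenAI

# Point the client at the locally served OpenAI-compatible endpoint.
# Base URL and API key are assumptions: vLLM listens on port 8000 by default
# (SGLang on 30000), and a local server accepts any placeholder key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
)

# With a reasoning parser enabled, the server separates the thinking trace
# from the final answer; the answer arrives in the regular message content.
print(response.choices[0].message.content)
```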
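The `enable_thinking` switch referenced in the tip is a chat-template argument, so when going through the served API it has to be forwarded in the request body. A hedged sketch of how the linked Qwen documentation describes doing this, passing `chat_template_kwargs` via `extra_body` (same assumed endpoint as above):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Disable thinking for this request by forwarding the chat-template argument
# through the OpenAI-compatible API, as described in the linked Qwen docs.
response = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B",
    messages=[{"role": "user", "content": "Summarize the benefits of mixture-of-experts models in two sentences."}],
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)
```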