yangapku committed
Commit 0412748 · verified · 1 Parent(s): 58ccb6e

Update README.md

Files changed (1):
  1. README.md +5 -5
README.md CHANGED
@@ -92,7 +92,7 @@ print("thinking content:", thinking_content)
 print("content:", content)
 ```
 
-For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create an OpenAI-compatible API endpoint:
+For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
 - SGLang:
 ```shell
 python -m sglang.launch_server --model-path Qwen/Qwen3-235B-A22B --reasoning-parser qwen3
@@ -102,7 +102,7 @@ For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create
 vllm serve Qwen/Qwen3-235B-A22B --enable-reasoning --reasoning-parser deepseek_r1
 ```
 
-For local use, applications such as llama.cpp, Ollama, LMStudio, and MLX-LM also support Qwen3.
+For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
 
 ## Switching Between Thinking and Non-Thinking Mode
 
@@ -276,7 +276,7 @@ YaRN is currently supported by several inference frameworks, e.g., `transformers`
 {
     ...,
     "rope_scaling": {
-        "type": "yarn",
+        "rope_type": "yarn",
         "factor": 4.0,
         "original_max_position_embeddings": 32768
     }
@@ -288,12 +288,12 @@ YaRN is currently supported by several inference frameworks, e.g., `transformers`
 
 For `vllm`, you can use
 ```shell
-vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
+vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
 ```
 
 For `sglang`, you can use
 ```shell
-python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
+python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
 ```
 
 For `llama-server` from `llama.cpp`, you can use
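Either deployment command in the first hunks exposes an OpenAI-compatible endpoint. A minimal sketch of a client call, assuming vLLM's default port 8000 and a placeholder API key (SGLang serves on its own default port; pass `--port` to either server to choose one explicitly):

```python
# Minimal sketch: query the OpenAI-compatible endpoint started above.
# Port 8000 is vLLM's default and an assumption here; adjust to your server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B",
    messages=[
        {"role": "user", "content": "Give me a short introduction to large language models."}
    ],
)
print(response.choices[0].message.content)
```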
 
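The `rope_scaling` edit renames `type` to `rope_type`, the key name `transformers` expects. As a sketch, the same YaRN override can also be applied at load time instead of editing `config.json`; the attribute-style assignment below is an assumed usage pattern, not part of this commit:

```python
# Sketch: apply the YaRN override programmatically (assumed pattern,
# mirrors the "rope_scaling" JSON block in the diff above).
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("Qwen/Qwen3-235B-A22B")
config.rope_scaling = {
    "rope_type": "yarn",  # renamed from "type" in this commit
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-235B-A22B", config=config)
```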
 
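The `--max-model-len 131072` in the `vllm` command follows from the scaling arithmetic: the YaRN factor times the original context window. A quick check:

```python
# The YaRN-scaled window is factor * original_max_position_embeddings,
# which is where --max-model-len 131072 comes from.
original_max_position_embeddings = 32768
factor = 4.0
print(int(factor * original_max_position_embeddings))  # 131072
```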