yangapku committed
Commit 8428b10 · verified · 1 Parent(s): 67b0e0c

Update README.md

Files changed (1): README.md (+5 -5)

README.md CHANGED
@@ -94,7 +94,7 @@ print("thinking content:", thinking_content)
 print("content:", content)
 ```
 
-For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create an OpenAI-compatible API endpoint:
+For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
 - SGLang:
 ```shell
 python -m sglang.launch_server --model-path Qwen/Qwen3-30B-A3B --reasoning-parser qwen3
@@ -104,7 +104,7 @@ For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create
 vllm serve Qwen/Qwen3-30B-A3B --enable-reasoning --reasoning-parser deepseek_r1
 ```
 
-For local use, applications such as llama.cpp, Ollama, LMStudio, and MLX-LM also support Qwen3.
+For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
 
 ## Switching Between Thinking and Non-Thinking Mode
 
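Both launch commands expose an OpenAI-compatible endpoint, so a quick smoke test only needs `curl`. A minimal sketch, assuming SGLang's default port 30000 (`vllm serve` defaults to 8000):

```shell
# Send one chat request to the locally served model (adjust the port for vllm)
curl -s http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen3-30B-A3B",
        "messages": [{"role": "user", "content": "Give me a short introduction to LLMs."}]
      }'
```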
@@ -278,7 +278,7 @@ YaRN is currently supported by several inference frameworks, e.g., `transformers`
 {
     ...,
     "rope_scaling": {
-        "type": "yarn",
+        "rope_type": "yarn",
         "factor": 4.0,
         "original_max_position_embeddings": 32768
     }
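For context, `factor` is the context-extension multiplier: 4.0 × 32768 = 131072 tokens, which is why the vllm command in the next hunk passes `--max-model-len 131072`. After editing a local `config.json` by hand, a cheap way to catch JSON typos (the path below is hypothetical):

```shell
# Prints a parse error and exits non-zero if the edited file is not valid JSON
python -m json.tool Qwen3-30B-A3B/config.json > /dev/null && echo "config.json parses"
```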
@@ -290,12 +290,12 @@ YaRN is currently supported by several inference frameworks, e.g., `transformers`
 
 For `vllm`, you can use
 ```shell
-vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
+vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
 ```
 
 For `sglang`, you can use
 ```shell
-python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
+python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
 ```
 
 For `llama-server` from `llama.cpp`, you can use
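Once a server is relaunched with the override, the advertised context window can be checked from the models endpoint; vllm, for instance, typically reports a `max_model_len` field there. A minimal sketch, assuming vllm's default port 8000:

```shell
# List served models; the response should reflect the extended 131072-token window
curl -s http://localhost:8000/v1/models
```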
 