littlebird13 committed
Commit c7dfecf · verified · 1 Parent(s): 6a73752

Update README.md

Files changed (1)
  1. README.md +12 -7
README.md CHANGED
@@ -23,6 +23,9 @@ This repo contains the FP8 version of **Qwen3-1.7B**, which has the following fe
 
 For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
 
+> [!TIP]
+> If you encounter significant endless repetitions, please refer to the [Best Practices](#best-practices) section for optimal sampling parameters, and set `presence_penalty` to 1.5.
+
 ## Quickstart
 
 The code for Qwen3 is available in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
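The tip added above becomes actionable once a server is running. A minimal sketch of passing `presence_penalty` through an OpenAI-compatible endpoint such as the ones created in the deployment step below; the base URL, API key, and prompt here are assumptions for illustration, not part of this commit:

```python
from openai import OpenAI

# Assumed endpoint from the deployment commands below; adjust host/port as needed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-1.7B-FP8",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    presence_penalty=1.5,  # mitigates endless repetition, per the tip above
)
print(response.choices[0].message.content)
```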
@@ -80,16 +83,18 @@ print("thinking content:", thinking_content)
 print("content:", content)
 ```
 
-For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint:
-- vLLM:
+For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create an OpenAI-compatible API endpoint:
+- SGLang:
 ```shell
-vllm serve Qwen/Qwen3-1.7B-FP8 --enable-reasoning --reasoning-parser deepseek_r1
+python -m sglang.launch_server --model-path Qwen/Qwen3-1.7B-FP8 --reasoning-parser qwen3
 ```
-- SGLang:
+- vLLM:
 ```shell
-python -m sglang.launch_server --model-path Qwen/Qwen3-1.7B-FP8 --reasoning-parser deepseek-r1
+vllm serve Qwen/Qwen3-1.7B-FP8 --enable-reasoning --reasoning-parser deepseek_r1
 ```
 
+For local use, applications such as llama.cpp, Ollama, LMStudio, and MLX-LM also support Qwen3.
+
 ## Note on FP8
 
 For convenience and performance, we have provided an `fp8`-quantized model checkpoint for Qwen3, whose name ends with `-FP8`. The quantization method is fine-grained `fp8` quantization with a block size of 128. You can find more details in the `quantization_config` field in `config.json`.
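For the FP8 note above, a quick sketch of inspecting that `quantization_config` field without downloading the weights; the exact keys and values printed depend on whatever the checkpoint's `config.json` actually contains:

```python
from transformers import AutoConfig

# Reads config.json from the Hub; quantization_config mirrors the field
# described above (fp8 method, fine-grained 128-block quantization).
config = AutoConfig.from_pretrained("Qwen/Qwen3-1.7B-FP8")
print(config.quantization_config)
```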
@@ -125,8 +130,8 @@ However, please pay attention to the following known issues:
 ## Switching Between Thinking and Non-Thinking Mode
 
 > [!TIP]
-> The `enable_thinking` switch is also available in APIs created by vLLM and SGLang.
-> Please refer to [our documentation](https://qwen.readthedocs.io/) for more details.
+> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
+> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
 
 ### `enable_thinking=True`
 
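For the API-side switch referenced in the updated tip, a minimal sketch assuming an OpenAI-compatible server as above; per the linked documentation, `chat_template_kwargs` is the pass-through both servers use for chat-template options such as `enable_thinking`, and the endpoint details here are assumptions:

```python
from openai import OpenAI

# Assumed endpoint; see the serving commands earlier in this diff.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-1.7B-FP8",
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
    # Forwarded to the chat template; disables thinking mode for this request.
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)
```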
 
 