Update README.md
README.md
Qwen3 is the latest generation of large language models in the Qwen series.

- Number of Parameters (Non-Embedding): 29.9B
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 32,768 tokens natively and [131,072 tokens with YaRN](#processing-long-texts).

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
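Some back-of-envelope arithmetic (mine, not figures quoted from the model card) makes these numbers concrete: each token is routed through only 8 of the 128 experts, and GQA shares 4 key/value heads across the 32 query heads, which shrinks the KV cache relative to full multi-head attention under the usual assumption that cache size scales with the number of KV heads.

```python
# Illustrative ratios implied by the spec list above; these are
# back-of-envelope numbers, not figures from the model card.

num_experts = 128      # Number of Experts
active_experts = 8     # Number of Activated Experts
q_heads = 32           # query heads (GQA)
kv_heads = 4           # shared key/value heads (GQA)

# Fraction of the expert pool each token is routed through.
active_fraction = active_experts / num_experts

# KV-cache size relative to standard multi-head attention,
# which would keep one K/V pair per query head.
kv_cache_ratio = kv_heads / q_heads

print(f"{active_fraction:.2%} of experts active per token")  # 6.25%
print(f"KV cache at {kv_cache_ratio:.1%} of the MHA size")   # 12.5%
```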
Qwen3 excels in tool calling capabilities. We recommend using Qwen-Agent.

To define the available tools, you can use the MCP configuration file, use the tools integrated into Qwen-Agent, or integrate other tools yourself.

```python
from qwen_agent.agents import Assistant

# Define LLM
# ... (configuration lines unchanged in this diff are omitted)

tools = [
    {
        'mcpServers': {
            'time': {
                'command': 'uvx',
                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
            },
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"]
            }
        }
    },
    'code_interpreter',  # Built-in tools
]

bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```
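A note on the trailing `for ... pass` / `print(responses)` idiom in the script: `bot.run` streams by yielding the response list as it grows, so draining the generator and printing afterwards shows only the final, complete state. A minimal sketch with a stand-in generator (`fake_run` is hypothetical, not the Qwen-Agent API):

```python
def fake_run(messages):
    """Stand-in for Assistant.run: yields the accumulating response list
    after each streamed chunk (hypothetical, for illustration only)."""
    text = ""
    for chunk in ["Qwen3 ", "blog ", "summary."]:
        text += chunk
        yield [{"role": "assistant", "content": text}]

messages = [{"role": "user", "content": "Introduce the latest developments of Qwen"}]

# Same idiom as in the README: drain the stream, keep the last value.
for responses in fake_run(messages):
    pass

print(responses[-1]["content"])  # prints "Qwen3 blog summary."
```
Each iteration rebinds `responses` to the latest partial result, so after the loop it holds the fully streamed output.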