Improve model card: Add paper link, license, pipeline tag, and languages
#2
by nielsr (HF Staff) - opened

README.md CHANGED

@@ -1,9 +1,51 @@
 ---
 library_name: transformers
-
-
+pipeline_tag: translation
+license: apache-2.0
+languages:
+- zh
+- en
+- fr
+- pt
+- es
+- ja
+- tr
+- ru
+- ar
+- ko
+- th
+- it
+- de
+- vi
+- ms
+- id
+- tl
+- hi
+- zh-Hant
+- pl
+- cs
+- nl
+- km
+- my
+- fa
+- gu
+- ur
+- te
+- mr
+- he
+- bn
+- ta
+- uk
+- bo
+- kk
+- mn
+- ug
+- yue
 ---
 
+# Hunyuan-MT-Chimera-7B-fp8: Multilingual Translation Model
+
+This repository contains the `Hunyuan-MT-Chimera-7B-fp8` model, as presented in the paper [Hunyuan-MT Technical Report](https://huggingface.co/papers/2509.05209).
 
 <p align="center">
 <img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>

@@ -42,6 +84,17 @@ Hunyuan-MT-Chimera-7B-fp8 was produced by [AngelSlim](https://github.com/Tencent
 <br>
 
 
+
+
+## Performance
+
+<div align='center'>
+<img src="https://github.com/Tencent-Hunyuan/Hunyuan-MT/raw/main/imgs/overall_performance.png" width="80%" />
+</div>
+You can refer to our technical report for more experimental results and analysis.
+
+<a href="https://www.arxiv.org/pdf/2509.05209"><b>Technical Report</b></a>
+
 
 
 ## Model Links

@@ -101,10 +154,10 @@ First, please install transformers, recommends v4.56.0
 pip install transformers==4.56.0
 ```
 
-The following code snippet shows how to use the transformers library to load and apply the model.
-
 *!!! If you want to load the fp8 model with transformers, you need to change the key "ignored_layers" in config.json to "ignore" and upgrade compressed-tensors to v0.11.0.*
 
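The rename described in the note above can be scripted; a minimal sketch, assuming the checkpoint has been downloaded locally and the key sits under `quantization_config` as in compressed-tensors checkpoints:

```python
import json
import pathlib

# Hypothetical local path to the downloaded fp8 checkpoint.
cfg_path = pathlib.Path("Hunyuan-MT-Chimera-7B-fp8/config.json")
cfg = json.loads(cfg_path.read_text())

quant = cfg.get("quantization_config", {})
if "ignored_layers" in quant:
    # compressed-tensors >= 0.11.0 expects the key to be named "ignore".
    quant["ignore"] = quant.pop("ignored_layers")
    cfg_path.write_text(json.dumps(cfg, indent=2))
```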
+The following code snippet shows how to use the transformers library to load and apply the model.
+
 We use tencent/Hunyuan-MT-7B as an example.
 
 ```python

@@ -116,7 +169,9 @@ model_name_or_path = "tencent/Hunyuan-MT-7B"
 tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")  # You may want to use bfloat16 and/or move to GPU here
 messages = [
-    {"role": "user", "content": "Translate the following segment into Chinese, without additional explanation.\n\nIt’s on the house."},
+    {"role": "user", "content": "Translate the following segment into Chinese, without additional explanation.
+
+It’s on the house."},
 ]
 tokenized_chat = tokenizer.apply_chat_template(
     messages,
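The diff shows only the start of the snippet; a minimal completion, assuming standard transformers generation rather than the README's exact continuation:

```python
# Render the chat template to input ids and generate the translation.
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][tokenized_chat.shape[-1]:], skip_special_tokens=True))
```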

@@ -182,6 +237,331 @@ Supported languages:
 | Uyghur | ug | 维吾尔语 |
 | Cantonese | yue | 粤语 |
 
+### Training Data Format
+
+If you need to fine-tune our Instruct model, we recommend processing the data into the following format.
+
+```python
+from transformers import AutoTokenizer
+
+messages = [
+    {"role": "system", "content": "You are a helpful assistant."},
+    {"role": "user", "content": "Why is seawater salty?"},
+    {"role": "assistant", "content": "Seawater is primarily saline due to dissolved salts and minerals. These substances come from the chemical materials in rocks and soil on the Earth's surface, which are carried into the ocean over time. When seawater evaporates, the water vapor leaves, but the salts and minerals remain, making the seawater saltier. Therefore, the salinity of seawater is determined by the amount of salts and minerals it contains."}
+]
+
+tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
+train_ids = tokenizer.apply_chat_template(messages)
+```
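To verify what the chat template actually produces, the ids can be decoded back into text (a sanity check, not part of the training pipeline):

```python
# apply_chat_template returns token ids by default; decoding shows the exact
# string the model will be trained on, special tokens included.
print(tokenizer.decode(train_ids))
```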
+
+
+
+### Train with LLaMA-Factory
+
+In the following section, we introduce how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
+
+#### Prerequisites
+
+Verify installation of the following dependencies:
+- **LLaMA-Factory**: follow the [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
+- **DeepSpeed** (optional): follow the [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
+- **Transformers**: use the companion branch (the Hunyuan-submitted code is pending review)
+```
+pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
+```
+
+#### Data preparation
+
+We need to prepare a custom dataset:
+1. Organize your data in `json` format and place it in the `data` directory of `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
+```
+[
+  {
+    "messages": [
+      {
+        "role": "system",
+        "content": "System prompt (optional)"
+      },
+      {
+        "role": "user",
+        "content": "Human instruction"
+      },
+      {
+        "role": "assistant",
+        "content": "Model response"
+      }
+    ]
+  }
+]
+```
+Refer to the [Training Data Format](#training-data-format) section above for details.
+
+2. Define your dataset in the `data/dataset_info.json` file using the following format:
+```
+"dataset_name": {
+  "file_name": "dataset.json",
+  "formatting": "sharegpt",
+  "columns": {
+    "messages": "messages"
+  },
+  "tags": {
+    "role_tag": "role",
+    "content_tag": "content",
+    "user_tag": "user",
+    "assistant_tag": "assistant",
+    "system_tag": "system"
+  }
+}
+```
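Before launching training, it is worth checking that the file parses and follows the structure above; a small illustrative check (the file name is an assumption):

```python
import json

# data/dataset.json is the hypothetical file registered in dataset_info.json.
with open("data/dataset.json") as f:
    samples = json.load(f)

for sample in samples:
    roles = [turn["role"] for turn in sample["messages"]]
    assert all(r in {"system", "user", "assistant"} for r in roles)
    assert roles[-1] == "assistant", "each sample should end with a model response"
print(f"{len(samples)} samples look structurally valid")
```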
+
+#### Training execution
+
+1. Copy all files from the `llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
+2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
+```
+### model
+model_name_or_path: [!!!add the model path here!!!]
+
+### dataset
+dataset: [!!!add the dataset name here!!!]
+```
+3. Execute training commands:
+**Single-node training**
+Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
+```
+export DISABLE_VERSION_CHECK=1
+llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
+```
+**Multi-node training**
+Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
+```
+export DISABLE_VERSION_CHECK=1
+FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
+llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
+```
+
+
+
+
+## Quantization Compression
+We used our own [AngelSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and INT4 quantized models. `AngelSlim` is a toolkit dedicated to building a more user-friendly, comprehensive, and efficient model compression solution.
+
+### FP8 Quantization
+We use FP8-static quantization: an 8-bit floating-point format in which the quantization scale is pre-determined from a small amount of calibration data (no training required), and the model weights and activations are converted to FP8 to improve inference efficiency and lower the deployment threshold. You can quantize the model yourself with AngelSlim, or directly download our pre-quantized open-source models from [AngelSlim](https://huggingface.co/AngelSlim).
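To make the pre-determined scale idea concrete, here is a toy per-tensor sketch of FP8-static quantization in PyTorch (an illustration of the principle only, not AngelSlim's implementation; requires a recent PyTorch with `float8_e4m3fn`):

```python
import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def calibrate_scale(calibration_batches):
    # The scale is fixed ahead of time from calibration data, so nothing
    # needs to be computed on the fly at inference.
    amax = max(batch.abs().max() for batch in calibration_batches)
    return amax / FP8_E4M3_MAX

def quantize_fp8(x, scale):
    return (x / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)

def dequantize_fp8(x_fp8, scale):
    return x_fp8.to(torch.float32) * scale

calib = [torch.randn(16, 64) for _ in range(8)]
scale = calibrate_scale(calib)
w = torch.randn(64, 64)
print((w - dequantize_fp8(quantize_fp8(w, scale), scale)).abs().mean())  # quantization error
```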
+
+
+## Deployment
+
+For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
+
+Docker image: https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags
+
+
+### TensorRT-LLM
+
+#### Docker Image
+
+We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
+
+We use tencent/Hunyuan-7B-Instruct as an example.
+- To get started:
+
+```
+docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-7b:hunyuan-7b-trtllm
+```
+```
+docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7b:hunyuan-7b-trtllm
+```
+
+- Prepare the configuration file:
+
+```
+cat >/path/to/extra-llm-api-config.yml <<EOF
+use_cuda_graph: true
+cuda_graph_padding_enabled: true
+cuda_graph_batch_sizes:
+- 1
+- 2
+- 4
+- 8
+- 16
+- 32
+print_iter_log: true
+EOF
+```
+
+- Start the API server:
+
+```
+trtllm-serve \
+    /path/to/HunYuan-7b \
+    --host localhost \
+    --port 8000 \
+    --backend pytorch \
+    --max_batch_size 32 \
+    --max_num_tokens 16384 \
+    --tp_size 2 \
+    --kv_cache_free_gpu_memory_fraction 0.6 \
+    --trust_remote_code \
+    --extra_llm_api_options /path/to/extra-llm-api-config.yml
+```
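Once the server is up, a quick liveness check against the endpoint (assuming trtllm-serve exposes the usual OpenAI-compatible routes at this host and port):

```python
import requests

# Lists the served model(s); any 200 response means the server is ready.
print(requests.get("http://localhost:8000/v1/models").json())
```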
+
+
+### vLLM
+
+#### Start
+Please use vLLM version v0.10.0 or higher for inference.
+
+First, please install the companion transformers branch; we will merge it into the main branch later.
+```shell
+pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
+```
+
+We use tencent/Hunyuan-7B-Instruct as an example.
+- Download the model files:
+  - Hugging Face: downloaded automatically by vLLM.
+  - ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
+
+- Model downloaded from Hugging Face:
+```shell
+export MODEL_PATH=tencent/Hunyuan-7B-Instruct
+```
+
+- Model downloaded from ModelScope:
+```shell
+export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
+```
+
+- Start the API server:
+
+```shell
+python3 -m vllm.entrypoints.openai.api_server \
+    --host 0.0.0.0 \
+    --port 8000 \
+    --trust-remote-code \
+    --model ${MODEL_PATH} \
+    --tensor-parallel-size 1 \
+    --dtype bfloat16 \
+    --quantization experts_int8 \
+    --served-model-name hunyuan \
+    2>&1 | tee log_server.txt
+```
+- After the service script runs successfully, run the request script:
+```shell
+curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
+    "model": "hunyuan",
+    "messages": [
+        {
+            "role": "system",
+            "content": [{"type": "text", "text": "You are a helpful assistant."}]
+        },
+        {
+            "role": "user",
+            "content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
+        }
+    ],
+    "max_tokens": 2048,
+    "temperature": 0.7,
+    "top_p": 0.6,
+    "top_k": 20,
+    "repetition_penalty": 1.05,
+    "stop_token_ids": [127960]
+}'
+```
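The same request can be issued from Python with any OpenAI-compatible client, for example the `openai` package; the sampling extras ride along in `extra_body`, and the test prompt is an English rendering of the one in the curl call above:

```python
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")  # vLLM ignores the key
resp = client.chat.completions.create(
    model="hunyuan",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        # English rendering of the curl example's test prompt.
        {"role": "user", "content": "Sort the four oceans by area and name the smallest one. Output the result directly."},
    ],
    max_tokens=2048,
    temperature=0.7,
    top_p=0.6,
    extra_body={"top_k": 20, "repetition_penalty": 1.05, "stop_token_ids": [127960]},
)
print(resp.choices[0].message.content)
```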
+#### Quantized model deployment
+This section describes the process of deploying a post-quantization model using vLLM.
+
+The default server runs in BF16.
+
+##### Int8 quantized model deployment
+Deploying the Int8-weight-only version of the HunYuan-7B model only requires setting the environment variables.
+
+Next we start the Int8 service. Run:
+```shell
+python3 -m vllm.entrypoints.openai.api_server \
+    --host 0.0.0.0 \
+    --port 8000 \
+    --trust-remote-code \
+    --model ${MODEL_PATH} \
+    --tensor-parallel-size 1 \
+    --dtype bfloat16 \
+    --served-model-name hunyuan \
+    --quantization experts_int8 \
+    2>&1 | tee log_server.txt
+```
+
+
+##### Int4 quantized model deployment
+Deploying the Int4-weight-only version of the HunYuan-7B model only requires setting the environment variables; it uses the GPTQ method.
+```shell
+export MODEL_PATH=PATH_TO_INT4_MODEL
+```
+Next we start the Int4 service. Run:
+```shell
+python3 -m vllm.entrypoints.openai.api_server \
+    --host 0.0.0.0 \
+    --port 8000 \
+    --trust-remote-code \
+    --model ${MODEL_PATH} \
+    --tensor-parallel-size 1 \
+    --dtype bfloat16 \
+    --served-model-name hunyuan \
+    --quantization gptq_marlin \
+    2>&1 | tee log_server.txt
+```
+
+##### FP8 quantized model deployment
+Deploying the W8A8C8 version of the HunYuan-7B model only requires setting the environment variables.
+
+Next we start the FP8 service. Run:
+```shell
+python3 -m vllm.entrypoints.openai.api_server \
+    --host 0.0.0.0 \
+    --port 8000 \
+    --trust-remote-code \
+    --model ${MODEL_PATH} \
+    --tensor-parallel-size 1 \
+    --dtype bfloat16 \
+    --served-model-name hunyuan \
+    --kv-cache-dtype fp8 \
+    2>&1 | tee log_server.txt
+```
+
+
+
+### SGLang
+
+#### Docker Image
+
+We also provide a pre-built Docker image based on the latest version of SGLang.
+
+We use tencent/Hunyuan-7B-Instruct as an example.
+
+To get started:
+
+- Pull the Docker image
+
+```
+docker pull lmsysorg/sglang:latest
+```
+
+- Start the API server:
+
+```
+docker run --entrypoint="python3" --gpus all \
+    --shm-size 32g \
+    -p 30000:30000 \
+    --ulimit nproc=10000 \
+    --privileged \
+    --ipc=host \
+    lmsysorg/sglang:latest \
+    -m sglang.launch_server --model-path hunyuan/huanyuan_7B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
+```
+
 Citing Hunyuan-MT:
 
 ```bibtex

@@ -191,4 +571,8 @@ Citing Hunyuan-MT:
   howpublished={\url{https://github.com/Tencent-Hunyuan/Hunyuan-MT}},
   year={2025}
 }
-```
+```
+
+## Contact Us
+
+If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also contact us via email ([email protected]).