Improve model card: Add pipeline tag, license, project page, and enhance links
#3 by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,28 +1,21 @@
 ---
 library_name: transformers
-
-
+pipeline_tag: translation
+license: apache-2.0
+project_page: https://hunyuan.tencent.com
 ---
 
-
+# Hunyuan-MT-Chimera-7B-fp8: Multilingual Translation Model
 
-<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
-</p><p></p>
-
+This model is presented in the paper [Hunyuan-MT Technical Report](https://huggingface.co/papers/2509.05209).
 
 <p align="center">
-
-🤖 <a href="https://modelscope.cn/collections/Hunyuan-MT-2ca6b8e1b4934f"><b>ModelScope</b></a> |
-🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
-</p>
-
-<p align="center">
-🖥️ <a href="https://hunyuan.tencent.com"><b>Official Website</b></a> |
-🕹️ <a href="https://hunyuan.tencent.com/modelSquare/home/list"><b>Demo</b></a>
+<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
 </p>
-
 <p align="center">
-<a href="https://github.com/Tencent-Hunyuan/Hunyuan-MT"><b>
+📚 <a href="https://huggingface.co/papers/2509.05209"><b>Paper</b></a> | 💻 <a href="https://github.com/Tencent-Hunyuan/Hunyuan-MT"><b>GitHub</b></a> | 🏠 <a href="https://hunyuan.tencent.com"><b>Official Website</b></a> | 🕹️ <a href="https://hunyuan.tencent.com/modelSquare/home/list"><b>Demo</b></a>
+<br>
+🤗 <a href="https://huggingface.co/collections/tencent/hunyuan-mt-68b42f76d473f82798882597"><b>Hugging Face Collection</b></a> | 🤖 <a href="https://modelscope.cn/collections/Hunyuan-MT-2ca6b8e1b4934f"><b>ModelScope</b></a> | 🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
 </p>
 
 

@@ -41,6 +34,16 @@ Hunyuan-MT-Chimera-7B-fp8 was produced by [AngelSlim](https://github.com/Tencent
 * 2025.9.1 We have open-sourced **Hunyuan-MT-7B**, **Hunyuan-MT-Chimera-7B** on Hugging Face.
 <br>
 
+
+
+## Performance
+
+<div align='center'>
+<img src="https://github.com/Tencent-Hunyuan/Hunyuan-MT/raw/main/imgs/overall_performance.png" width="80%" />
+</div>
+You can refer to our technical report for more experimental results and analysis.
+
+[**Technical Report**](https://www.arxiv.org/pdf/2509.05209)
 
 
 

@@ -101,8 +104,6 @@ First, please install transformers, recommends v4.56.0
 pip install transformers==4.56.0
 ```
 
-The following code snippet shows how to use the transformers library to load and apply the model.
-
 *!!! If you want to load the fp8 model with transformers, you need to rename "ignored_layers" in config.json to "ignore" and upgrade compressed-tensors to 0.11.0.*
 
 We use tencent/Hunyuan-MT-7B as an example.

@@ -116,7 +117,9 @@ model_name_or_path = "tencent/Hunyuan-MT-7B"
 tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")  # You may want to use bfloat16 and/or move to GPU here
 messages = [
-    {"role": "user", "content": "Translate the following segment into Chinese, without additional explanation
+    {"role": "user", "content": "Translate the following segment into Chinese, without additional explanation.
+
+It’s on the house."},
 ]
 tokenized_chat = tokenizer.apply_chat_template(
     messages,

@@ -173,7 +176,7 @@ Supported languages:
 | Telugu | te | 泰卢固语 |
 | Marathi | mr | 马拉地语 |
 | Hebrew | he | 希伯来语 |
-| Bengali | bn |
+| Bengali | bn | 孟加拉语 |
 | Tamil | ta | 泰米尔语 |
 | Ukrainian | uk | 乌克兰语 |
 | Tibetan | bo | 藏语 |

@@ -182,13 +185,19 @@ Supported languages:
 | Uyghur | ug | 维吾尔语 |
 | Cantonese | yue | 粤语 |
 
-
-
+## Citation
 ```bibtex
-@misc{
-
-
-
-
+@misc{hunyuan_mt,
+      title={Hunyuan-MT Technical Report},
+      author={Mao Zheng and Zheng Li and Bingxin Qu and Mingyang Song and Yang Du and Mingrui Sun and Di Wang},
+      year={2025},
+      eprint={2509.05209},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL},
+      url={https://arxiv.org/abs/2509.05209},
 }
-```
+```
+
+## Contact Us
+
+If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also contact us via email ([email protected]).