Improve model card: Add pipeline tag, paper and project page links
This PR improves the model card for the Tower+ 72B model.
It adds `pipeline_tag: text-generation` to the metadata, which ensures the model is properly categorized and discoverable on the Hugging Face Hub (e.g., at https://huggingface.co/models?pipeline_tag=text-generation).
It also adds prominent links to the paper and the project page at the beginning of the model card for easier access to relevant information.
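
As a side note, once merged, the tag also makes the model reachable via the Hub's programmatic filters. A minimal sketch with `huggingface_hub` (assuming an installed version whose `list_models` accepts the `pipeline_tag` argument):

```python
from huggingface_hub import HfApi

api = HfApi()

# Once this PR is merged, Unbabel/Tower-Plus-72B should appear among the
# Hub's text-generation models. The pipeline_tag argument is assumed to be
# available in your installed huggingface_hub version.
for model in api.list_models(author="Unbabel", pipeline_tag="text-generation", limit=10):
    print(model.id)
```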
README.md
CHANGED
```diff
@@ -1,6 +1,5 @@
 ---
 base_model: Qwen/Qwen2.5-72B
-license: cc-by-nc-sa-4.0
 language:
 - de
 - nl
@@ -25,29 +24,35 @@ language:
 - ro
 - fi
 library_name: transformers
+license: cc-by-nc-sa-4.0
+pipeline_tag: text-generation
 ---
 
+This repository contains the Tower+ 72B model, as presented in the paper [Tower+: Bridging Generality and Translation Specialization in Multilingual LLMs](https://huggingface.co/papers/2506.17080).
+
+Project Page: [https://huggingface.co/collections/Unbabel/tower-plus-6846ca452a10c0905dc03c0f](https://huggingface.co/collections/Unbabel/tower-plus-6846ca452a10c0905dc03c0f)
+
 
 
 # Model Description:
 
 **Tower+ 72B** is built on top of Qwen 2.5 72B. The model goes through Continuous Pretraining (CPT), Instruction Tuning (IT), and Weighted Preference Optimization (WPO). During all these stages we include parallel and multilingual data (covering 22 languages).
 
 - **Developed by:** Unbabel
 - **Model type:** A 72B parameter model fine-tuned on a mix of _translation-related tasks_ as well as _general instruction-following_ datasets that include reasoning, code instructions, etc.
 - **Languages:** German, Spanish, French, Italian, Korean, Dutch, Russian, English, Portuguese (Portugal), Portuguese (Brazilian), Spanish (Latin America), Chinese (Simplified), Chinese (Traditional), Czech, Ukrainian, Hindi, Icelandic, Japanese, Polish, Swedish, Hungarian, Romanian, Danish, Norwegian (Nynorsk), Norwegian (Bokmål), Finnish
 - **License:** CC-BY-NC-4.0
 - **Context Size:** 131,072 tokens (recommended generation tokens 8192)
 
 # Intended uses & limitations
 
 Tower is intended for multilingual tasks and is especially strong on translation-related tasks.
 
 Another use case where Tower works well is creating multilingual synthetic data (for the languages it covers). You can do this either by translating instructions and the respective answers or by asking the model to create an instruction given a document as seed data.
 
 # Usage:
 
 When using the model, make sure your prompt is formatted correctly!
 
 Also, we recommend using vLLM rather than Hugging Face.
 
@@ -63,7 +68,9 @@ sampling_params = SamplingParams(
     max_tokens=8192,
 )
 llm = LLM(model="Unbabel/Tower-Plus-72B", tensor_parallel_size=4)
-messages = [{"role": "user", "content": "Translate the following English source text to Portuguese (Portugal)\nEnglish: Hello world!\nPortuguese (Portugal): "}]
+messages = [{"role": "user", "content": "Translate the following English source text to Portuguese (Portugal):
+English: Hello world!
+Portuguese (Portugal): "}]
 outputs = llm.chat(messages, sampling_params)
 # Make sure your prompt_token_ids look like this
 print(outputs[0].outputs[0].text)
@@ -80,7 +87,9 @@ from transformers import pipeline
 
 pipe = pipeline("text-generation", model="Unbabel/Tower-Plus-72B", device_map="auto")
 # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
-messages = [{"role": "user", "content": "Translate the following English source text to Portuguese (Portugal)\nEnglish: Hello world!\nPortuguese (Portugal): "}]
+messages = [{"role": "user", "content": "Translate the following English source text to Portuguese (Portugal):
+English: Hello world!
+Portuguese (Portugal): "}]
 input_ids = pipe.tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True)
 outputs = pipe(messages, max_new_tokens=256, do_sample=False)
 print(outputs[0]["generated_text"])
```
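
For reference, here is the card's vLLM example assembled into a self-contained script. The `temperature` value is an assumption, since the diff truncates the `SamplingParams(...)` call; everything else follows the card:

```python
from vllm import LLM, SamplingParams

# max_tokens follows the card; temperature=0.0 is an assumption (the diff
# only shows the tail of the SamplingParams call).
sampling_params = SamplingParams(temperature=0.0, max_tokens=8192)

# tensor_parallel_size=4 follows the card; adjust to your GPU count.
llm = LLM(model="Unbabel/Tower-Plus-72B", tensor_parallel_size=4)

messages = [{
    "role": "user",
    "content": (
        "Translate the following English source text to Portuguese (Portugal):\n"
        "English: Hello world!\n"
        "Portuguese (Portugal): "
    ),
}]

# llm.chat applies the model's chat template before generating.
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```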
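Similarly, the `messages` literal in the `transformers` snippet spans physical lines inside a string, so it is not valid Python as rendered in the card; a runnable equivalent, with generation settings taken from the card:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="Unbabel/Tower-Plus-72B", device_map="auto")

# The text-generation pipeline applies the tokenizer's chat template to
# chat-style inputs like this one.
messages = [{
    "role": "user",
    "content": (
        "Translate the following English source text to Portuguese (Portugal):\n"
        "English: Hello world!\n"
        "Portuguese (Portugal): "
    ),
}]

outputs = pipe(messages, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
```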
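Finally, the synthetic-data use case the card describes can be sketched the same way; the prompt wording and seed document below are illustrative assumptions, not from the card:

```python
# A sketch of the synthetic-data use case described in the card: asking the
# model to draft a new instruction (and its answer) from a seed document.
# The prompt wording and seed text are illustrative, not prescribed anywhere.
seed_document = "..."  # any document in one of the 27 covered languages

messages = [{
    "role": "user",
    "content": (
        "Given the following document, write one instruction a user might ask "
        "about it, in Portuguese (Portugal), followed by its answer.\n\n"
        f"Document: {seed_document}"
    ),
}]

# Reuse the vLLM setup from the first example:
# outputs = llm.chat(messages, sampling_params)
```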