Improve model card: Add pipeline tag, paper and project page links (#1)
- Improve model card: Add pipeline tag, paper and project page links (b7d70253bd576dc5ddbfc030178eef484225808e)
- Update README.md (4e7eb0a92ad620ee6c9d21ced151acf37d18906d)
Co-authored-by: Niels Rogge <[email protected]>
README.md
CHANGED
---
base_model: Qwen/Qwen2.5-72B
language:
- de
- nl
...
- ro
- fi
library_name: transformers
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
---

This repository contains the Tower+ 72B model, as presented in the paper [Tower+: Bridging Generality and Translation Specialization in Multilingual LLMs](https://huggingface.co/papers/2506.17080).

Project Page: [https://huggingface.co/collections/Unbabel/tower-plus-6846ca452a10c0905dc03c0f](https://huggingface.co/collections/Unbabel/tower-plus-6846ca452a10c0905dc03c0f)
# Model Description:

**Tower+ 72B** is built on top of Qwen 2.5 72B. The model goes through Continuous Pretraining (CPT), Instruction Tuning (IT), and Weighted Preference Optimization (WPO) stages. During all of these stages we include parallel and multilingual data (covering 22 languages).
- **Developed by:** Unbabel
- **Model type:** A 72B parameter model fine-tuned on a mix of _translation-related tasks_ as well as _general instruction-following_ datasets that include reasoning, code instructions, etc.
- **Languages:** German, Spanish, French, Italian, Korean, Dutch, Russian, English, Portuguese (Portugal), Portuguese (Brazilian), Spanish (Latin America), Chinese (Simplified), Chinese (Traditional), Czech, Ukrainian, Hindi, Icelandic, Japanese, Polish, Swedish, Hungarian, Romanian, Danish, Norwegian (Nynorsk), Norwegian (Bokmål), Finnish
- **License:** CC-BY-NC-SA-4.0
- **Context Size:** 131,072 tokens (recommended generation length: up to 8,192 tokens)
# Intended uses & limitations

Tower is intended for multilingual tasks and is especially strong on translation-related tasks.

Another use case where Tower works well is creating multilingual synthetic data (for the languages it covers). You can do this either by translating instructions and their respective answers, or by asking the model to generate an instruction given a document as seed data.
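As an illustrative sketch of the translation approach (the seed pair, prompt wording, and helper name below are assumptions, not an official recipe):

```python
# Illustrative: turn an English instruction/answer pair into a German one by
# asking the model to translate each side. The prompt wording is an assumption;
# follow the prompt format recommended for this model.
seed = {
    "instruction": "Explain what a hash table is.",
    "answer": "A hash table maps keys to values with a hash function, giving average O(1) lookups.",
}

def translation_messages(text: str, tgt_lang: str) -> list[dict]:
    # Wrap a translation request as a chat message list.
    prompt = (
        f"Translate the following English source text to {tgt_lang}.\n"
        f"English: {text}\n"
        f"{tgt_lang}: "
    )
    return [{"role": "user", "content": prompt}]

requests = [
    translation_messages(seed["instruction"], "German"),
    translation_messages(seed["answer"], "German"),
]
# Send each request to the model (see the vLLM example under Usage) and pair
# the two translations to obtain a German instruction-following example.
```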
# Usage:

When using the model, make sure your prompt is formatted correctly!
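One way to do that is to let the tokenizer's chat template build the prompt. A minimal sketch, assuming the repository id `Unbabel/Tower-Plus-72B` (substitute this repo's actual id on the Hub):

```python
from transformers import AutoTokenizer

# Assumed repository id; replace with this repo's id on the Hub.
tokenizer = AutoTokenizer.from_pretrained("Unbabel/Tower-Plus-72B")

messages = [{
    "role": "user",
    "content": (
        "Translate the following English source text to Portuguese (Portugal):\n"
        "English: Hello, world!\n"
        "Portuguese (Portugal): "
    ),
}]

# apply_chat_template adds the special tokens the model was trained with.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```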
Also, we recommend using vLLM rather than Hugging Face `transformers` for inference.
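A minimal vLLM sketch under the same assumptions (the repository id and tensor-parallel degree are placeholders; a 72B model needs several GPUs):

```python
from vllm import LLM, SamplingParams

# Assumed repository id and tensor-parallel degree; adjust to your setup.
llm = LLM(model="Unbabel/Tower-Plus-72B", tensor_parallel_size=4)
sampling = SamplingParams(temperature=0.0, max_tokens=8192)  # recommended generation budget

messages = [{
    "role": "user",
    "content": (
        "Translate the following English source text to Portuguese (Portugal):\n"
        "English: Hello, world!\n"
        "Portuguese (Portugal): "
    ),
}]

# llm.chat applies the model's chat template before generating.
outputs = llm.chat(messages, sampling)
print(outputs[0].outputs[0].text)
```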