ModelSpace committed on
Commit 39d0bc4 · verified · 1 Parent(s): 47dc5e4

Update README.md

Files changed (1)
  1. README.md +2 -26
README.md CHANGED

@@ -15,11 +15,9 @@ pipeline_tag: translation
 
  GemmaX2-28-2B-Pretrain is a language model that results from continual pretraining of Gemma2-2B on a mix of 56 billion tokens of monolingual and parallel data in 28 different languages — Arabic, Bengali, Czech, German, English, Spanish, Persian, French, Hebrew, Hindi, Indonesian, Italian, Japanese, Khmer, Korean, Lao, Malay, Burmese, Dutch, polish, Portuguese, Russian, Thai, Tagalog, Turkish, Urdu, Vietnamese, Chinese.
 
- GemmaX2-28-2B-v0.1 is the model version of GemmaX2-28-2B-Pretrain after SFT.
-
  - **Developed by:** Xiaomi
- - **Model type:** A 2B parameter model base on Gemma2, we obtained GemmaX2-28-2B-Pretrain by continuing pre-training on a large amount of monolingual and parallel data. Afterward, GemmaX2-28-2B-v0.1 was derived through supervised fine-tuning on a small set of high-quality instruction data.
- - **Language(s) (NLP):** Arabic, Bengali, Czech, German, English, Spanish, Persian, French, Hebrew, Hindi, Indonesian, Italian, Japanese, Khmer, Korean, Lao, Malay, Burmese, Dutch, polish, Portuguese, Russian, Thai, Tagalog, Turkish, Urdu, Vietnamese, Chinese.
+ - **Model type:** A 2B parameter model base on Gemma2-2B, we obtained GemmaX2-28-2B-Pretrain by continuing pre-training on a large amount of monolingual and parallel data.
+ - **Languages:** Arabic, Bengali, Czech, German, English, Spanish, Persian, French, Hebrew, Hindi, Indonesian, Italian, Japanese, Khmer, Korean, Lao, Malay, Burmese, Dutch, polish, Portuguese, Russian, Thai, Tagalog, Turkish, Urdu, Vietnamese, Chinese.
  - **License:** gemma
 
  ### Model Source
@@ -30,28 +28,6 @@ GemmaX2-28-2B-v0.1 is the model version of GemmaX2-28-2B-Pretrain after SFT.
 
  ![Experimental Result](main.png)
 
- ## Limitations
-
- GemmaX2-28-2B-v0.1 supports only the 28 most commonly used languages and does not guarantee powerful translation performance for other languages. Additionally, we will continue to improve GemmaX2-28-2B's translation performance, and future models will be release in due course.
-
-
-
- ## Run the model
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = "ModelSpace/GemmaX2-28-2B-Pretrain"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
-
- model = AutoModelForCausalLM.from_pretrained(model_id)
-
- text = "Translate this from Chinese to English:\nChinese: 我爱机器翻译\nEnglish:"
- inputs = tokenizer(text, return_tensors="pt")
-
- outputs = model.generate(**inputs, max_new_tokens=50)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```
 
  ### Training Data
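Since this commit removes the "Run the model" quick-start without adding a replacement elsewhere in the file, here is a minimal sketch of the removed usage, kept for reference. It assumes the `transformers` library and the `ModelSpace/GemmaX2-28-2B-Pretrain` checkpoint exactly as named in the removed snippet; treat it as illustrative rather than as the maintainers' recommended invocation.

```python
# Minimal sketch based on the snippet removed in this commit; assumes the
# `transformers` library and the ModelSpace/GemmaX2-28-2B-Pretrain checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelSpace/GemmaX2-28-2B-Pretrain"

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt template used by the removed README for translation.
text = "Translate this from Chinese to English:\nChinese: 我爱机器翻译\nEnglish:"
inputs = tokenizer(text, return_tensors="pt")

# Generate up to 50 new tokens and decode only the visible text.
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The `max_new_tokens=50` cap and `skip_special_tokens=True` come straight from the removed snippet; only the comments are new.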
 
 