---
license: apache-2.0
language:
- ar
- bn
- cs
- de
- en
- es
- fa
- fr
- he
- hi
- id
- it
- ja
- km
- ko
- lo
- ms
- my
- nl
- pl
- pt
- ru
- th
- tl
- tr
- ur
- vi
- zh
base_model:
- ModelSpace/GemmaX2-28-2B-v0.1
pipeline_tag: translation
library_name: transformers
tags:
- gemma
- translation
- multilingual
- quantized
---
# Model Card for GemmaX2-28-2B GGUF Quantizations

## Model Overview

**GemmaX2-28-2B GGUF Quantizations** are a set of quantized variants of `GemmaX2-28-2B-v0.1`, an LLM-based translation model developed by Xiaomi. The original model was finetuned from `GemmaX2-28-2B-Pretrain`, itself a continually pretrained version of `Gemma2-2B` trained on a diverse dataset of 56 billion tokens spanning 28 languages. These GGUF versions (`f16`, `bf16`, `q8_0`, `tq1_0`, `tq2_0`) were created to optimize the model for efficient inference in resource-constrained environments while preserving translation capabilities.

- **Developed by**: Xiaomi (original model); quantized by Tonic
- **Model Type**: Transformer-based language model, finetuned for translation and quantized to GGUF format
- **Quantization Formats**: `f16` (16-bit float), `bf16` (bfloat16), `q8_0` (8-bit quantization), `tq1_0` (ternary quantization, ~1.69 bits per weight), `tq2_0` (ternary quantization, ~2.06 bits per weight)
- **Languages**: Arabic, Bengali, Czech, German, English, Spanish, Persian, French, Hebrew, Hindi, Indonesian, Italian, Japanese, Khmer, Korean, Lao, Malay, Burmese, Dutch, Polish, Portuguese, Russian, Thai, Tagalog, Turkish, Urdu, Vietnamese, Chinese
- **License**: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Repository**: [Tonic/GemmaX2-28-2B-gguf](https://huggingface.co/Tonic/GemmaX2-28-2B-gguf)

## Model Description

`GemmaX2-28-2B-v0.1` is a multilingual machine translation model built on `GemmaX2-28-2B-Pretrain`, which was pretrained on a mix of monolingual and parallel data (56 billion tokens) across the 28 languages listed above. Finetuning used a small, high-quality set of translation instruction data to enhance translation performance. These GGUF quantizations were generated with `convert_hf_to_gguf.py`, converting the original Hugging Face model into formats compatible with tools like `llama.cpp` for efficient deployment.

### Quantization Details

- **Source Model**: `ModelSpace/GemmaX2-28-2B-v0.1`
- **Conversion Tool**: `convert_hf_to_gguf.py` (from `llama.cpp`)
- **Quantization Types**:
  - `f16`: 16-bit floating point; minimal precision loss, larger file size (~5-7 GB).
  - `bf16`: brain floating point 16-bit; optimized for certain hardware (e.g., TPUs), similar size to `f16`.
  - `q8_0`: 8-bit quantization; reduced size (~3-4 GB), slight precision trade-off.
  - `tq1_0`: ternary quantization (~1.69 bits per weight); smallest size (~1-2 GB), highest precision loss.
  - `tq2_0`: ternary quantization (~2.06 bits per weight); slightly larger than `tq1_0`, balancing size against quality.

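For reference, conversions like these can be reproduced with the script shipped in `llama.cpp`. The following is a minimal sketch, assuming a recent `llama.cpp` checkout (older versions lack the `tq1_0`/`tq2_0` output types) and illustrative local paths:

```bash
# Sketch: converting the Hugging Face checkpoint to each GGUF type.
git clone https://github.com/ggerganov/llama.cpp.git
pip install -r llama.cpp/requirements.txt

# Fetch the source model, then convert once per target type.
huggingface-cli download ModelSpace/GemmaX2-28-2B-v0.1 --local-dir ./GemmaX2-28-2B-v0.1
for type in f16 bf16 q8_0 tq1_0 tq2_0; do
  python llama.cpp/convert_hf_to_gguf.py ./GemmaX2-28-2B-v0.1 \
    --outtype "$type" --outfile "gemmax2-28-2b-${type}.gguf"
done
```
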
## Intended Use

These quantized models are intended for:
- **Multilingual Translation**: Translating text across the 28 supported languages.
- **Efficient Inference**: Deployment on edge devices, low-memory systems, or environments with limited compute resources using GGUF-compatible frameworks (e.g., `llama.cpp`).
- **Research**: Studying the trade-offs between quantization levels and translation performance.

### Use Cases

- Real-time translation applications (e.g., behind a local inference server, as sketched below).
- Offline translation on mobile or embedded devices.
- Benchmarking quantized LLM performance in multilingual settings.

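For the real-time case, `llama.cpp` bundles an HTTP server with an OpenAI-compatible API. A minimal sketch, assuming the `q8_0` file sits in the working directory and `llama.cpp` has been built as shown under "How to Use" below:

```bash
# Sketch: serving the q8_0 model over an OpenAI-compatible HTTP endpoint.
./build/bin/llama-server -m gemmax2-28-2b-q8_0.gguf --port 8080

# From another shell, request a translation through the completions API.
curl http://localhost:8080/v1/completions -d '{
  "prompt": "Translate this from Chinese to English:\nChinese: 我爱机器翻译\nEnglish:",
  "max_tokens": 64
}'
```
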
## Model Performance

The original `GemmaX2-28-2B-v0.1` model’s performance is detailed in the paper [Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study](https://arxiv.org/abs/2502.02481). Quantization introduces varying degrees of performance trade-offs:
- **`f16` and `bf16`**: Near-identical to the original model’s accuracy, with minimal degradation.
- **`q8_0`**: Slight reduction in translation quality, still suitable for most practical applications.
- **`tq1_0` and `tq2_0`**: Noticeable quality loss, best for scenarios prioritizing speed and size over precision.

Exact metrics depend on the downstream task and dataset; users are encouraged to evaluate performance for their specific use case.

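One lightweight way to compare quantization levels locally is `llama.cpp`’s perplexity tool. Perplexity is only a rough proxy for translation quality, not a substitute for a proper MT benchmark, and `test.txt` below is an assumed placeholder for any representative multilingual text:

```bash
# Sketch: comparing quantization damage via perplexity on a held-out text file.
# Lower perplexity generally means the quantized weights track f16 more closely.
for model in gemmax2-28-2b-f16.gguf gemmax2-28-2b-q8_0.gguf gemmax2-28-2b-tq1_0.gguf; do
  ./build/bin/llama-perplexity -m "$model" -f test.txt
done
```
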
## How to Use

### With Transformers (Original Model)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the original (non-quantized) model from the Hugging Face Hub.
model_id = "ModelSpace/GemmaX2-28-2B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The model expects this "Translate this from X to Y:" prompt format.
text = "Translate this from Chinese to English:\nChinese: 我爱机器翻译\nEnglish:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### With GGUF (Quantized Models)
Download a GGUF file from `Tonic/GemmaX2-28-2B-gguf` and use it with a GGUF-compatible inference tool such as `llama.cpp`:

```bash
# Build llama.cpp (current versions build with CMake; older checkouts used `make`).
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Run inference with the q8_0 model (the binary was named ./main in older releases).
./build/bin/llama-cli -m gemmax2-28-2b-q8_0.gguf -p "Translate from Chinese to English: 我爱机器翻译"
```

Available files:
- `gemmax2-28-2b-f16.gguf`
- `gemmax2-28-2b-bf16.gguf`
- `gemmax2-28-2b-q8_0.gguf`
- `gemmax2-28-2b-tq1_0.gguf`
- `gemmax2-28-2b-tq2_0.gguf`

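Each file can also be fetched individually from the Hub; for example, to pull just the `q8_0` quantization into the current directory:

```bash
# Sketch: downloading a single GGUF file rather than cloning the whole repo.
huggingface-cli download Tonic/GemmaX2-28-2B-gguf gemmax2-28-2b-q8_0.gguf --local-dir .
```
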
## Limitations

- **Language Support**: Only the 28 languages listed above are supported; performance on other languages is not guaranteed.
- **Quantization Trade-offs**: Lower-bit quantizations (`tq1_0`, `tq2_0`) may degrade translation quality, especially for complex sentences or rare language pairs.
- **Hardware Compatibility**: `bf16` benefits from specific hardware support (e.g., NVIDIA Ampere GPUs, TPUs); performance may vary elsewhere.
- **Future Improvements**: The original authors plan to further enhance `GemmaX2-28-2B`’s translation capabilities; such updates will not appear in these quantized versions until they are re-converted.

## Citation

For the original model:
```bibtex
@misc{cui2025multilingualmachinetranslationopen,
  title={Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study},
  author={Menglong Cui and Pengzhi Gao and Wei Liu and Jian Luan and Bin Wang},
  year={2025},
  eprint={2502.02481},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.02481},
}
```

For these quantized versions, please also credit:
- **Quantization by**: Tonic
- **Repository**: [Tonic/GemmaX2-28-2B-gguf](https://huggingface.co/Tonic/GemmaX2-28-2B-gguf)

## Contact

For questions about the original model, refer to Xiaomi’s publication. For issues with the GGUF quantizations, open a discussion on the Hugging Face repository `Tonic/GemmaX2-28-2B-gguf`.