Update README.md
`GemmaX2-28-2B-v0.1` is a multilingual machine translation model built on `GemmaX2-28-2B-Pretrain`, which was pretrained on a mix of monolingual and parallel data (56 billion tokens) across 28 languages. It was then finetuned on a small, high-quality set of translation instruction data to improve translation performance. These GGUF quantizations were generated with `convert_hf_to_gguf.py`, converting the original Hugging Face model into formats compatible with tools like `llama.cpp` for efficient deployment.
**Languages:** Arabic, Bengali, Czech, German, English, Spanish, Persian, French, Hebrew, Hindi, Indonesian, Italian, Japanese, Khmer, Korean, Lao, Malay, Burmese, Dutch, Polish, Portuguese, Russian, Thai, Tagalog, Turkish, Urdu, Vietnamese, Chinese.
### Quantization Details
- **Source Model**: `ModelSpace/GemmaX2-28-2B-v0.1`
- **Conversion Tool**: `convert_hf_to_gguf.py`
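
The conversion workflow described above can be sketched as a short shell session. The local paths, output file names, the `Q4_K_M` quantization type, and the prompt template below are illustrative assumptions, not the exact commands used for this release:

```shell
# Convert the Hugging Face checkpoint (a local directory) to an f16 GGUF file.
# convert_hf_to_gguf.py ships with llama.cpp; paths are assumptions.
python convert_hf_to_gguf.py ./GemmaX2-28-2B-v0.1 \
    --outfile gemmax2-28-2b-v0.1-f16.gguf --outtype f16

# Optionally quantize the f16 GGUF with llama.cpp's llama-quantize tool.
./llama-quantize gemmax2-28-2b-v0.1-f16.gguf \
    gemmax2-28-2b-v0.1-Q4_K_M.gguf Q4_K_M

# Run a translation with llama-cli (prompt wording is an assumption).
./llama-cli -m gemmax2-28-2b-v0.1-Q4_K_M.gguf \
    -p "Translate this from Chinese to English:\nChinese: 你好\nEnglish:"
```

Smaller quantization types trade accuracy for memory; `Q4_K_M` is a common middle ground, but any type supported by `llama-quantize` can be substituted.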