# LLaMA 3.2 1B – English ↔ Persian Translator
This model is a fine-tuned version of `meta-llama/Llama-3.2-1B`, trained for bidirectional translation between English and Persian. It supports both:
- 🇬🇧 English → 🇮🇷 Persian
- 🇮🇷 Persian → 🇬🇧 English
## Format

The model expects prompts in the following format:

```
### English:
The children were playing in the park.
### Persian:
```

or

```
### Persian:
کودکان در پارک بازی می‌کردند.
### English:
```
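The two prompt layouts above can be produced with a small helper. This is a sketch for convenience only; `build_prompt` and its `direction` argument are illustrative names, not part of the model's API:

```python
def build_prompt(text: str, direction: str = "en2fa") -> str:
    """Build a translation prompt in the format the model expects.

    direction: "en2fa" for English → Persian, "fa2en" for Persian → English.
    (Illustrative helper; not shipped with the model.)
    """
    if direction == "en2fa":
        return f"### English:\n{text}\n### Persian:\n"
    if direction == "fa2en":
        return f"### Persian:\n{text}\n### English:\n"
    raise ValueError(f"unknown direction: {direction!r}")
```

The trailing `### Persian:` (or `### English:`) header is what cues the model to emit the translation in the target language.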
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Sheikhaei/llama-3.2-1b-en-fa-translator",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Sheikhaei/llama-3.2-1b-en-fa-translator")

prompt = """### English:
The children were playing in the park.
### Persian:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
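Note that the decoded output above includes the prompt followed by the generated translation. A minimal sketch for keeping only the translation, assuming greedy decoding echoes the prompt verbatim (the `strip_prompt` helper is illustrative, not part of the model's API):

```python
def strip_prompt(decoded: str, prompt: str) -> str:
    """Drop the echoed prompt from a decoded generation, keeping the translation.

    (Illustrative helper; assumes the decoded text starts with the prompt.)
    """
    if decoded.startswith(prompt):
        return decoded[len(prompt):].strip()
    return decoded.strip()
```

Alternatively, you can slice the prompt tokens off `outputs[0]` (the first `inputs["input_ids"].shape[1]` tokens) before decoding.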
## Training Data

This model was fine-tuned on a custom English–Persian parallel dataset of roughly 640,000 sentence pairs. The source data was collected from Tatoeba, then translated and expanded using the Gemma-3-12B model.
## Evaluation

| Direction | BLEU | COMET |
|---|---|---|
| English → Persian | 0.47 | 0.89 |
| Persian → English | 0.58 | 0.91 |
## License

Apache 2.0