## Model Prefixes

"translate Russian to Sakha: " - Ru-sah
"translate Sakha to Russian: " - sah-Ru

## How to Get Started with the Model

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("lab-ii/mt5-yakut")
tokenizer = AutoTokenizer.from_pretrained("lab-ii/mt5-yakut")

def predict(text, prefix, a=32, b=3, max_input_length=1024, num_beams=3, **kwargs):
    # Prepend the task prefix and tokenize the source text.
    inputs = tokenizer(prefix + text, return_tensors='pt', padding=True,
                       truncation=True, max_length=max_input_length)
    # Cap generation length as a linear function of the input length:
    # at most a + b * n_input_tokens new tokens.
    result = model.generate(
        **inputs.to(model.device),
        max_new_tokens=int(a + b * inputs.input_ids.shape[1]),
        num_beams=num_beams,
        **kwargs
    )
    return tokenizer.batch_decode(result, skip_special_tokens=True)
```
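
The helper already moves its inputs to `model.device`, so placing the model on a GPU (when one is available) is all that is needed to speed up generation. A minimal sketch:

```python
import torch

# Optional: run generation on a GPU if one is available;
# predict() already moves its inputs to model.device.
if torch.cuda.is_available():
    model = model.to("cuda")
```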

```python
sentence: str = "Фотограф опубликовал снимки с прошедшего феста."
# "The photographer published photos from the recent fest."

translation = predict(sentence, prefix="translate Russian to Sakha: ")
print(translation)
# ['Бэрэограф ааспыт фесттан хаартыскалары ыытан көрдөрбүт.']
```
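
The reverse direction works the same way with the other prefix. As a quick round-trip sketch (no recorded output for this run), the Sakha translation above can be fed back into the model:

```python
# Round-trip sketch: translate the Sakha output back into Russian.
back = predict(translation[0], prefix="translate Sakha to Russian: ")
print(back)
```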
Model size: 601M parameters (safetensors, F32 weights)

Base model: google/mt5-base (this model is a fine-tune of it)
