---
language:
- it
pipeline_tag: translation
---
This is a fine-tuned version of multilingual BART (mBART), trained for Grammatical Error Correction in Italian on the public MERLIN dataset.
To initialize the model:

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("MRNH/finetuned-mbart-it-gec", output_hidden_states=True)
```
To generate text using the model:

```python
tokenizer = MBart50TokenizerFast.from_pretrained("MRNH/finetuned-mbart-it-gec", src_lang="it_IT", tgt_lang="it_IT")

input = tokenizer("I was here yesterday to studying", text_target="I was here yesterday to study", return_tensors="pt")
output = model.generate(input["input_ids"], attention_mask=input["attention_mask"], forced_bos_token_id=tokenizer.lang_code_to_id["it_IT"])
```
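To recover the corrected sentence from the generated token IDs, a minimal decoding sketch using the standard `transformers` API (this step is not shown in the original snippet):

```python
# Decode the generated token IDs back into text, dropping special tokens
# such as the language code and end-of-sequence markers.
corrected = tokenizer.batch_decode(output, skip_special_tokens=True)
print(corrected[0])
```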