Update app.py
app.py CHANGED
@@ -3,7 +3,7 @@ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
 
 article='''
 # Spanish Nahuatl Automatic Translation
-Nahuatl is the most widely spoken indigenous language in Mexico. However, training a neural network for the
+Nahuatl is the most widely spoken indigenous language in Mexico. However, training a neural network for the neural machine translation task is hard due to the lack of structured data. The most popular datasets, such as the Axolotl dataset and the bible-corpus, consist of only ~16,000 and ~7,000 samples, respectively. Moreover, there are multiple variants of Nahuatl, which makes this task even more difficult. For example, a single word from the Axolotl dataset can be found written in more than three different ways. Therefore, in this work, we leverage the T5 text-to-text prefix training strategy to compensate for the lack of data. We first teach the multilingual model Spanish using English, then we make the transition to Spanish-Nahuatl. The resulting model successfully translates short sentences from Spanish to Nahuatl. We report ChrF and BLEU results.
 
 ## Motivation
 
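The hunk header shows app.py importing AutoModelForSeq2SeqLM and AutoTokenizer, so the app presumably loads a fine-tuned T5 checkpoint and generates Nahuatl output from a prefixed Spanish input. Below is a minimal sketch of that flow; the checkpoint name and the exact task prefix are assumptions, not taken from this diff.

```python
# Sketch of how the translation described in the article text might be invoked.
# MODEL_NAME and the "translate Spanish to Nahuatl:" prefix are placeholders/assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "your-org/t5-spanish-nahuatl"  # hypothetical checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def translate(text: str) -> str:
    # T5-style models are conditioned with a task prefix prepended to the input
    inputs = tokenizer("translate Spanish to Nahuatl: " + text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=128)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Usage example: a short Spanish sentence, as the article says the model handles best
print(translate("El agua está fría"))
```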