milmor committed
Commit 28aaaed · Parent: 4504ebe

Update app.py

Files changed (1): app.py (+4, -4)
app.py CHANGED
@@ -3,7 +3,7 @@ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
 
 article='''
 # Spanish Nahuatl Automatic Translation
- Nahuatl is the most widely spoken indigenous language in Mexico. However, training a neural network for the neural machine translation task is hard due to the lack of structured data. The most popular datasets such as the Axolot dataset and the bible-corpus only consist of ~16,000 and ~7,000 samples respectively. Moreover, there are multiple variants of Nahuatl, which makes this task even more difficult. For example, a single word from the Axolot dataset can be found written in more than three different ways. Therefore, in this work, we leverage the T5 text-to-text suffix training strategy to compensate for the lack of data. We first teach the multilingual model Spanish using English, then we make the transition to Spanish-Nahuatl. The resulting model successfully translates short sentences from Spanish to Nahuatl. We report Chrf and BLEU results.
+ Nahuatl is the most widely spoken indigenous language in Mexico. However, training a neural network for the neural machine translation task is hard due to the lack of structured data. The most popular datasets, such as the Axolotl dataset and the bible-corpus, only consist of ~16,000 and ~7,000 samples respectively. Moreover, there are multiple variants of Nahuatl, which makes this task even more difficult. For example, a single word from the Axolotl dataset can be found written in more than three different ways. Therefore, in this work, we leverage the T5 text-to-text prefix training strategy to compensate for the lack of data. We first teach the multilingual model Spanish using English, then we make the transition to Spanish-Nahuatl. The resulting model successfully translates short sentences from Spanish to Nahuatl. We report chrF and BLEU results.
 
 ## Motivation
 
@@ -54,13 +54,13 @@ Since the Axolotl corpus contains misalignments, we just select the best samples
 Also, to increase the amount of data, we collected 3,000 extra samples from the web.
 
 ### Model and training
- We employ two training-stages using a multilingual T5-small. We use this model because it can handle different vocabularies and suffixes. T5-small is pretrained on different tasks and languages (French, Romanian, English, German).
+ We employ two training stages using a multilingual T5-small. We use this model because it can handle different vocabularies and prefixes. T5-small is pre-trained on different tasks and languages (French, Romanian, English, German).
 
 ### Training-stage 1 (learning Spanish)
- In training stage 1 we first introduce Spanish to the model. The goal is to learn a new language rich in data (Spanish) and not lose the previous knowledge acquired. We use the English-Spanish [Anki](https://www.manythings.org/anki/) dataset, which consists of 118,964 text pairs. We train the model till convergence adding the suffix "Translate Spanish to English: ".
+ In training stage 1 we first introduce Spanish to the model. The goal is to learn a new language that is rich in data (Spanish) without losing the previously acquired knowledge. We use the English-Spanish [Anki](https://www.manythings.org/anki/) dataset, which consists of 118,964 text pairs. We train the model until convergence, adding the prefix "Translate Spanish to English: ".
 
 ### Training-stage 2 (learning Nahuatl)
- We use the pretrained Spanish-English model to learn Spanish-Nahuatl. Since the amount of Nahuatl pairs is limited, we also add to our dataset 20,000 samples from the English-Spanish Anki dataset. This two-task-training avoids overfitting end makes the model more robust.
+ We use the pre-trained Spanish-English model to learn Spanish-Nahuatl. Since the number of Nahuatl pairs is limited, we also add 20,000 samples from the English-Spanish Anki dataset to our training set. This two-task training avoids overfitting and makes the model more robust.
 
 ### Training setup
 We train the models on the same datasets for 660k steps using batch size = 16 and a learning rate of 2e-5.
 
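The file under change already imports `AutoModelForSeq2SeqLM` and `AutoTokenizer`, so the prefix strategy the article describes plausibly looks like the following at inference time. This is a minimal sketch: the checkpoint name and the Spanish-to-Nahuatl prefix string are assumptions, neither appears in the diff.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical checkpoint name; the diff does not name the published model.
CKPT = "milmor/t5-small-spanish-nahuatl"

tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSeq2SeqLM.from_pretrained(CKPT)

def translate(spanish: str) -> str:
    # T5 selects the task through a text prefix; this exact string is an
    # assumption modeled on the stage-1 prefix quoted in the article.
    inputs = tokenizer("translate Spanish to Nahuatl: " + spanish,
                       return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=128)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(translate("muchas gracias"))
```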
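For the "Model and training" step, a sketch of the assumed starting point: the article says "multilingual T5-small" without naming a checkpoint, and plain `t5-small` is the one whose pre-training mixture matches the listed languages (French, Romanian, English, German).

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed base checkpoint; not named in the diff.
base = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

# T5 routes tasks through text prefixes rather than task-specific heads,
# which is what lets the same model absorb a new language pair later.
print(f"{model.num_parameters():,} parameters")  # ~60M for t5-small
```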
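A sketch of how the stage-1 Anki pairs could be encoded. The prefix is the one quoted in the article; the base checkpoint and the truncation length are assumptions.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # assumed base checkpoint
PREFIX = "Translate Spanish to English: "              # prefix quoted in the article

def encode_pair(spanish: str, english: str) -> dict:
    # The task prefix goes on the source side only; the target is the
    # plain translation. max_length=128 is an assumed truncation limit.
    enc = tokenizer(PREFIX + spanish, truncation=True, max_length=128)
    enc["labels"] = tokenizer(text_target=english,
                              truncation=True, max_length=128)["input_ids"]
    return enc

example = encode_pair("¿Cómo estás?", "How are you?")
```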
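The stage-2 two-task mixture might be assembled like this. The 20,000-sample cap comes from the article; the tuple layout and the Nahuatl-task prefix are assumptions.

```python
import random

# Assumed in-memory layout: (source, target, task_prefix) triples.
anki_pairs = [
    ("Hola.", "Hello.", "Translate Spanish to English: "),
    # ... ~118k pairs in the full Anki dataset
]
nahuatl_pairs = [
    ("agua", "atl", "Translate Spanish to Nahuatl: "),  # assumed prefix
    # ... the filtered Axolotl pairs plus the 3,000 web samples
]

random.seed(0)
random.shuffle(anki_pairs)

# Keep all of the scarce Nahuatl data and cap the auxiliary English-Spanish
# task at 20,000 samples, as described above, then shuffle the mixture.
mixed = nahuatl_pairs + anki_pairs[:20_000]
random.shuffle(mixed)
```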
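The quoted hyperparameters map directly onto transformers' `Seq2SeqTrainingArguments`; a sketch, with every field not stated in the article (output path, save and logging cadence) an assumed placeholder.

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="t5-small-spanish-nahuatl",  # hypothetical path
    max_steps=660_000,                      # 660k steps, as stated
    per_device_train_batch_size=16,         # batch size = 16
    learning_rate=2e-5,                     # learning rate 2e-5
    save_steps=50_000,                      # assumed checkpoint cadence
    logging_steps=500,                      # assumed logging cadence
)
```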