For this study, we fine-tuned the base version of XLM-RoBERTa with Masked Language Modeling (MLM) to adapt it to transliteration and code-switching in Malayalam-English text. The MLM objective randomly masks a subset of input tokens and trains the model to predict them from their surrounding context, allowing it to learn contextual embeddings tailored to the linguistic challenges of bilingual text.
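The sketch below shows how such MLM fine-tuning can be set up with the Hugging Face Transformers library. The corpus file name, hyperparameters, and the 15% masking probability are assumptions (the standard RoBERTa default), not values reported here.

```python
# Minimal MLM fine-tuning sketch with Hugging Face Transformers.
# Paths and hyperparameters are illustrative assumptions.
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# Combined corpus: original, fully transliterated, and partially
# transliterated sentences, one per line (file name is hypothetical).
with open("mlm_corpus.txt", encoding="utf-8") as f:
    sentences = [line.strip() for line in f if line.strip()]

dataset = Dataset.from_dict({"text": sentences}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# The collator masks a random subset of tokens in each batch; the model
# is trained to recover them from context.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="xlmr-malayalam-mlm",
        per_device_train_batch_size=16,
        num_train_epochs=3,
        learning_rate=2e-5,
    ),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```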
To adapt XLM-RoBERTa effectively, the MLM training dataset was constructed from three key components:
- Original data: monolingual Malayalam text from the AI4Bharat corpus.
- Fully transliterated data: every word in the original data transliterated into Roman script.
- Partially transliterated data: a randomly selected 20% to 70% of the words in each sentence transliterated into Roman script (a construction sketch follows this list).
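The following sketch illustrates how the fully and partially transliterated variants can be generated. The use of the indic-transliteration package and the ITRANS romanisation scheme is an assumption; a different transliteration tool or scheme may have been used in practice.

```python
# Illustrative construction of the transliterated data variants.
import random

from indic_transliteration import sanscript
from indic_transliteration.sanscript import transliterate


def fully_transliterate(sentence: str) -> str:
    """Transliterate the whole sentence into Roman script."""
    return transliterate(sentence, sanscript.MALAYALAM, sanscript.ITRANS)


def partially_transliterate(sentence: str, low: float = 0.2, high: float = 0.7) -> str:
    """Transliterate a random 20%-70% subset of the words into Roman script."""
    words = sentence.split()
    if not words:
        return sentence
    ratio = random.uniform(low, high)
    # Indices of the words chosen for transliteration.
    chosen = set(random.sample(range(len(words)), k=max(1, round(ratio * len(words)))))
    return " ".join(
        transliterate(w, sanscript.MALAYALAM, sanscript.ITRANS) if i in chosen else w
        for i, w in enumerate(words)
    )
```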
The resulting model is a Malayalam masked language model fine-tuned from the base XLM-RoBERTa architecture.
Perplexity: 4.15
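A perplexity figure like the one above is typically obtained by exponentiating the mean masked-LM loss on held-out text. The sketch below assumes an evaluation file and reuses the `trainer` and `tokenizer` from the fine-tuning sketch; both the file name and the evaluation split are assumptions.

```python
# Perplexity = exp(mean MLM loss) on a held-out split.
import math

from datasets import Dataset

# Held-out sentences kept aside from the training corpus (hypothetical path).
with open("mlm_eval.txt", encoding="utf-8") as f:
    eval_sentences = [line.strip() for line in f if line.strip()]

eval_dataset = Dataset.from_dict({"text": eval_sentences}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# `trainer` and `tokenizer` come from the fine-tuning sketch above.
metrics = trainer.evaluate(eval_dataset=eval_dataset)
print(f"Perplexity: {math.exp(metrics['eval_loss']):.2f}")
```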