---
license: llama3.1
datasets:
- allenai/MADLAD-400
language:
- te
base_model:
- meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
---
# Llama 3.1 8B Instruct for Telugu: Continual pre-training only
This model is Llama 3.1 8B Instruct adapted to Telugu via continual pre-training on 500M target-language tokens sampled from MADLAD-400.
## Model Details
- **Vocabulary:** This model has no additional target vocabulary. It retains the original vocabulary of Llama 3.1 8B Instruct.
- **Training:** This model was continually pre-trained on 500M target-language tokens sampled from MADLAD-400 (see the sketch after this list).
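
For orientation, here is a minimal sketch of what such continual pre-training could look like with the Hugging Face `Trainer`. This is not the authors' actual recipe: the data file, sequence length, and all hyperparameters below are illustrative assumptions.

```python
# Minimal continual pre-training sketch (illustrative only; the data path and
# hyperparameters are assumptions, not the recipe used for this model).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

# Hypothetical local dump of Telugu text; the real MADLAD-400 sampling differs.
raw = load_dataset("text", data_files={"train": "telugu_madlad_sample.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

train = raw.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama-3.1-8b-instruct-te-lapt",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=1e-5,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=train,
    # mlm=False gives standard causal-LM (next-token) training labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```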
### Model Description
- **Language:** Telugu
- **License:** Llama 3.1 Community License Agreement
- **Fine-tuned from model:** meta-llama/Llama-3.1-8B-Instruct
### Model Sources
- **Repository:** https://github.com/gucci-j/chat-cve
- **Paper:** https://arxiv.org/abs/2412.11704
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# The adapted model keeps the original Llama 3.1 vocabulary,
# so the tokenizer is loaded from the base checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/Llama-3.1-8B-Instruct-te-lapt-madlad"
)
tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct"
)
```
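
Once loaded, the model can be prompted like any Llama 3.1 Instruct checkpoint. Below is a minimal generation example continuing from the snippet above; the prompt and decoding settings are illustrative, not recommendations.

```python
import torch

# Illustrative chat-style generation; prompt and sampling settings are arbitrary.
messages = [
    {"role": "user", "content": "తెలుగులో ఒక చిన్న కథ చెప్పండి."}  # "Tell a short story in Telugu."
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```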
## Citation
```bibtex
@misc{yamaguchi2024vocabularyexpansionchatmodels,
    title={{ElChat}: Adapting Chat Language Models Using Only Target Unlabeled Language Data},
    author={Atsuki Yamaguchi and Terufumi Morishita and Aline Villavicencio and Nikolaos Aletras},
    year={2024},
    eprint={2412.11704},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2412.11704},
}
```