---
license: llama3.1
datasets:
- allenai/MADLAD-400
language:
- my
base_model:
- meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
---
# Llama 3.1 8B Instruct for Burmese: Vocabulary expansion
This model is built on top of Llama 3.1 8B Instruct and adapted for Burmese through continued pre-training on 500M target-language tokens sampled from MADLAD-400. Its vocabulary was expanded with an additional 10K target-language tokens.
## Model Details
- Vocabulary: This model has an additional target vocabulary of 10K tokens.
- Target vocabulary initialization: The target weights of the embedding and LM head were initialized using mean initialization (a minimal sketch of this step follows the list).
- Training: This model was continually pre-trained on 500M target-language tokens sampled from MADLAD-400.
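For readers curious about the vocabulary-expansion step, the sketch below shows one common form of mean initialization: each new token's embedding and LM-head row is set to the mean of the embeddings of the subwords it decomposes into under the original tokenizer. The extended-tokenizer path and the exact averaging variant are assumptions for illustration, not necessarily the recipe used for this model; see the paper for the authoritative details.

```python
# Sketch of mean initialization for an expanded vocabulary.
# Assumptions: "path/to/extended-tokenizer" is a placeholder, and each new
# token is averaged over its subword decomposition under the source
# tokenizer; the paper's exact variant may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_id)
source_tok = AutoTokenizer.from_pretrained(base_id)
target_tok = AutoTokenizer.from_pretrained("path/to/extended-tokenizer")  # hypothetical

old_size = len(source_tok)
model.resize_token_embeddings(len(target_tok))
embeddings = model.get_input_embeddings().weight.data
lm_head = model.get_output_embeddings().weight.data

with torch.no_grad():
    for token_id in range(old_size, len(target_tok)):
        # Recover the surface string of the new token (whitespace handling
        # is simplified here) and re-tokenize it with the source tokenizer.
        text = target_tok.decode([token_id])
        sub_ids = source_tok(text, add_special_tokens=False)["input_ids"]
        if sub_ids:  # skip tokens that map to nothing
            embeddings[token_id] = embeddings[sub_ids].mean(dim=0)
            lm_head[token_id] = lm_head[sub_ids].mean(dim=0)
```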
### Model Description
- Language: Burmese
- License: Llama 3.1 Community License Agreement
- Fine-tuned from model: meta-llama/Llama-3.1-8B-Instruct
### Model Sources
- Repository: https://github.com/gucci-j/chat-cve
- Paper: https://arxiv.org/abs/2412.11704
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the Burmese-adapted model together with its extended tokenizer.
model = AutoModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/Llama-3.1-8B-Instruct-my-madlad-mean-tuned"
)
tokenizer = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/Llama-3.1-8B-Instruct-my-madlad-mean-tuned"
)
```
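Since this is an instruct model, a quick way to try it is through the tokenizer's chat template. The prompt below is only illustrative; any Burmese input works the same way.

```python
# Illustrative generation through the chat template (prompt is a placeholder).
messages = [{"role": "user", "content": "မင်္ဂလာပါ"}]  # "Hello" in Burmese
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```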
## Citation
```bibtex
@misc{yamaguchi2024vocabularyexpansionchatmodels,
    title={{ElChat}: Adapting Chat Language Models Using Only Target Unlabeled Language Data},
    author={Atsuki Yamaguchi and Terufumi Morishita and Aline Villavicencio and Nikolaos Aletras},
    year={2024},
    eprint={2412.11704},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2412.11704},
}
```