---
license: llama3.1
datasets:
  - allenai/MADLAD-400
language:
  - ta
base_model:
  - meta-llama/Llama-3.1-8B-Instruct
  - atsuki-yamaguchi/Llama-3.1-8B-Instruct-ta-madlad-mean-tuned
library_name: transformers
---

# Llama 3.1 8B Instruct for Tamil: ElChat (No Copy)

This model is Llama 3.1 8B Instruct adapted to Tamil using 500M target-language tokens sampled from MADLAD-400, with an additional target vocabulary of 10K tokens. It was adapted using the ElChat method without special token weight copying.

## Model Details

- **Vocabulary:** This model has an additional target vocabulary of 10K tokens.
- **Target vocabulary initialization:** The target weights of the embedding and LM head were initialized using mean initialization (see the sketch after this list).
- **Training:** This model was continually pre-trained on 500M target-language tokens sampled from MADLAD-400.
- **Post-processing:** The model was post-processed using the ElChat method without special token weight copying.
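
For reference, mean initialization sets each new embedding and LM head row to the average of the corresponding pre-trained rows. The sketch below illustrates this step with `transformers`; it assumes the 10K new tokens are appended at the end of the vocabulary and is not the authors' released code.

```python
import torch
from transformers import AutoModelForCausalLM

# Illustration of mean initialization for an expanded vocabulary.
# Assumption: the 10K new Tamil tokens occupy the last rows of the
# resized embedding matrix and LM head.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
old_vocab_size = model.get_input_embeddings().weight.shape[0]
num_new_tokens = 10_000

model.resize_token_embeddings(old_vocab_size + num_new_tokens)

with torch.no_grad():
    emb = model.get_input_embeddings().weight    # input embeddings
    head = model.get_output_embeddings().weight  # LM head
    # Each new row becomes the mean of the original pre-trained rows.
    emb[old_vocab_size:] = emb[:old_vocab_size].mean(dim=0)
    head[old_vocab_size:] = head[:old_vocab_size].mean(dim=0)
```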

### Model Description

- **Language:** Tamil
- **License:** Llama 3.1 Community License Agreement
- **Fine-tuned from model:** meta-llama/Llama-3.1-8B-Instruct

### Model Sources

- **Paper:** [ElChat: Adapting Chat Language Models Using Only Target Unlabeled Language Data](https://arxiv.org/abs/2412.11704)

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the adapted model and its extended tokenizer (base vocabulary + 10K Tamil tokens).
model = AutoModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/Llama-3.1-8B-Instruct-ta-madlad-mean-slerp0305-emb"
)
tokenizer = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/Llama-3.1-8B-Instruct-ta-madlad-mean-slerp0305-emb"
)
```
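
Since this is a chat model, prompts should go through the tokenizer's chat template. A minimal example (the prompt and generation settings are illustrative):

```python
# Build a chat prompt and generate a reply.
messages = [
    {"role": "user", "content": "வணக்கம்! நீங்கள் யார்?"}  # "Hello! Who are you?"
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```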

## Citation

```bibtex
@misc{yamaguchi2024vocabularyexpansionchatmodels,
      title={{ElChat}: Adapting Chat Language Models Using Only Target Unlabeled Language Data},
      author={Atsuki Yamaguchi and Terufumi Morishita and Aline Villavicencio and Nikolaos Aletras},
      year={2024},
      eprint={2412.11704},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.11704},
}
```