arxiv:2506.07597

Instructing Large Language Models for Low-Resource Languages: A Systematic Study for Basque

Published on Jun 9

Abstract

AI-generated summary

Using instruction-tuned models as backbones and synthetic instructions enables effective language adaptation for low-resource languages, yielding high-quality results comparable to larger models.

Instructing language models with user intent requires large instruction datasets, which are only available for a limited set of languages. In this paper, we explore alternatives to conventional instruction adaptation pipelines in low-resource scenarios. We assume a realistic scenario for low-resource languages, where only the following are available: corpora in the target language, existing open-weight multilingual base and instructed backbone LLMs, and synthetically generated instructions sampled from the instructed backbone. We present a comprehensive set of experiments for Basque that systematically study different combinations of these components, evaluated on benchmarks and human preferences from 1,680 participants. Our conclusions show that target language corpora are essential, that synthetic instructions yield robust models, and, most importantly, that using an instruction-tuned model as backbone outperforms using a non-instructed base model, with further gains when scaling up. Using Llama 3.1 Instruct 70B as backbone, our model comes near frontier models of much larger size for Basque, without using any Basque data apart from the 1.2B-word corpus. We release code, models, instruction datasets, and human preferences to support full reproducibility in future research on low-resource language adaptation.
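
The abstract describes sampling synthetic instructions directly from the instructed backbone. As a rough illustration only (not the paper's released pipeline), the sketch below shows how such sampling might look with the Hugging Face transformers library; the model name, prompt, and generation settings are assumptions for illustration.

```python
# Hypothetical sketch: sampling a synthetic instruction from an instructed
# backbone LLM. Model name, prompt, and sampling parameters are illustrative
# assumptions, not the configuration used in the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

backbone = "meta-llama/Llama-3.1-70B-Instruct"  # assumed instructed backbone
tokenizer = AutoTokenizer.from_pretrained(backbone)
model = AutoModelForCausalLM.from_pretrained(backbone, device_map="auto")

# Ask the instructed backbone to produce an instruction; a second generation
# step could then answer it, yielding synthetic instruction-response pairs.
messages = [{"role": "user",
             "content": "Write one self-contained instruction a user might ask an assistant."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```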

Models citing this paper: 4

Datasets citing this paper: 8


Spaces citing this paper: 0

Collections including this paper: 1