Luth-1.7B-Instruct

Luth-1.7B-Instruct is a French fine-tuned version of Qwen3-1.7B, trained on the Luth-SFT dataset. Fine-tuning substantially improves its French capabilities in instruction following, math, and general knowledge, while its English capabilities remain stable and even improve in some areas.

Our evaluation, training, and data scripts are available on GitHub, along with the blog post we wrote.
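For a quick test of the model, the snippet below is a minimal inference sketch using the transformers library. The repo id `kurakurai/Luth-1.7B-Instruct` (the non-GGUF counterpart of this repo) and the decoding settings are assumptions, not an official usage recipe.

```python
# Minimal inference sketch (assumed repo id and settings, not an official recipe).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kurakurai/Luth-1.7B-Instruct"  # assumed safetensors counterpart of the GGUF repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explique le théorème de Pythagore en une phrase."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding, matching the temperature=0 setting used in the evaluations below.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```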

Model Details

Luth was trained with full fine-tuning on the Luth-SFT dataset using Axolotl, and the resulting model was then merged with the base Qwen3-1.7B model. This process retained the model's English capabilities while improving its performance on most of the selected benchmarks in both French and English.
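The exact merge recipe is not detailed in this card. As a rough illustration only, the sketch below shows one common approach: a simple linear interpolation of the fine-tuned and base weights. The mixing ratio, repo ids, and output path are all hypothetical.

```python
# Hypothetical merge sketch: linear interpolation between the SFT model and the base
# Qwen3-1.7B weights. The actual merge method and mixing ratio used for Luth are not
# specified in this card.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B", torch_dtype=torch.float32)
sft = AutoModelForCausalLM.from_pretrained("kurakurai/Luth-1.7B-Instruct", torch_dtype=torch.float32)

alpha = 0.5  # hypothetical mixing weight for the fine-tuned model
sft_state = sft.state_dict()
merged = {name: alpha * sft_state[name] + (1.0 - alpha) * param
          for name, param in base.state_dict().items()}

base.load_state_dict(merged)
base.save_pretrained("luth-1.7b-merged")  # hypothetical output directory
```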

Benchmark Results

We used LightEval for evaluation, with custom tasks for the French benchmarks. All models were evaluated at a temperature of 0.

Evaluation Visualizations

[Figure: French evaluation results]

[Figure: English evaluation results]

French Benchmark Scores

| Benchmark | Qwen3-1.7B | SmolLM2-1.7B-Instruct | Qwen2.5-1.5B-Instruct | Luth-1.7B-Instruct |
|---|---|---|---|---|
| ifeval-fr | 54.53 | 31.24 | 32.90 | 57.67 |
| gpqa-diamond-fr | 26.90 | 21.83 | 28.93 | 38.58 |
| mmlu-fr | 28.46 | 33.73 | 46.25 | 49.66 |
| math-500-fr | 60.80 | 11.20 | 32.20 | 64.00 |
| arc-chall-fr | 33.28 | 28.57 | 32.68 | 35.16 |
| hellaswag-fr | 24.86 | 49.58 | 34.34 | 31.93 |

English Benchmark Scores

| Benchmark | Qwen3-1.7B | SmolLM2-1.7B-Instruct | Qwen2.5-1.5B-Instruct | Luth-1.7B-Instruct |
|---|---|---|---|---|
| ifeval-en | 68.39 | 48.24 | 39.93 | 65.80 |
| gpqa-diamond-en | 31.82 | 24.75 | 30.30 | 31.82 |
| mmlu-en | 52.74 | 50.27 | 59.81 | 60.19 |
| math-500-en | 69.20 | 22.40 | 56.00 | 70.00 |
| arc-chall-en | 36.09 | 42.32 | 41.04 | 42.24 |
| hellaswag-en | 46.96 | 66.94 | 64.48 | 58.55 |

Citation

@misc{luth2025kurakurai,
  title   = {Luth-1.7B-Instruct},
  author  = {Kurakura AI Team},
  year    = {2025},
  howpublished = {\url{https://huggingface.co/kurakurai/Luth-0.6B}},
  note    = {Qwen3-1.7B fine-tuned on French datasets}
}