---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
datasets:
- kurakurai/luth-sft
language:
- fr
- en
base_model:
- LiquidAI/LFM2-700M
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- luth
---

# Luth-LFM2-700M
**Luth-LFM2-700M** is a French fine-tuned version of [LiquidAI/LFM2-700M](https://huggingface.co/LiquidAI/LFM2-700M), trained on the [Luth-SFT](https://huggingface.co/datasets/kurakurai/luth-sft) dataset. It shows improved French performance on instruction following, math, and general knowledge, while its English capabilities remain stable.
Our evaluation, training, and data scripts are available on GitHub, along with a blog post that details our recipe.
## Model Details
The model was trained using full fine-tuning on the Luth-SFT dataset with Axolotl. The resulting model was then merged back with LFM2-700M. This process successfully retained the model's English capabilities while improving its performance in French.
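The merge-back step can be thought of as a weighted interpolation between the fine-tuned weights and the base checkpoint. Below is a minimal sketch of that idea, not the exact procedure used for Luth-LFM2-700M: the actual merge method and ratio are not specified in this card, and `alpha`, the checkpoint path, and the output directory are illustrative placeholders.

```python
# Sketch: merge a fine-tuned checkpoint back into its base model via
# linear interpolation of weights. The real Luth-LFM2-700M recipe may
# differ; alpha and the paths below are assumptions.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-700M")
tuned = AutoModelForCausalLM.from_pretrained("path/to/luth-sft-checkpoint")  # hypothetical

alpha = 0.5  # weight toward the fine-tuned model (illustrative value)

with torch.no_grad():
    tuned_state = tuned.state_dict()
    merged_state = {
        name: (1 - alpha) * param + alpha * tuned_state[name]
        for name, param in base.state_dict().items()
    }

base.load_state_dict(merged_state)
base.save_pretrained("luth-lfm2-700m-merged")  # hypothetical output path
```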
## Benchmark Results
We used [LightEval](https://github.com/huggingface/lighteval) for evaluation, with custom tasks for the French benchmarks. The models were evaluated at `temperature=0`.
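For reference, `temperature=0` corresponds to greedy, deterministic decoding. The snippet below is a sketch of what that setting means in plain `transformers` terms (via `do_sample=False`); it is not the LightEval harness or the custom French tasks themselves.

```python
# Greedy decoding: the transformers equivalent of temperature=0.
# do_sample=False disables sampling, so the output is deterministic.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kurakurai/Luth-LFM2-700M")
model = AutoModelForCausalLM.from_pretrained("kurakurai/Luth-LFM2-700M")

inputs = tokenizer("Quelle est la capitale de la France?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```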
### French Benchmark Scores
| Benchmark | LFM2-700M | Luth-LFM2-700M |
|---|---|---|
| ifeval-fr | 42.33 | 50.65 |
| gpqa-fr | 26.55 | 28.04 |
| mmlu-fr | 43.69 | 44.71 |
| math-500-fr | 32.40 | 36.60 |
| kholle | 39.43 | 43.33 |
| arc-chall-fr | 36.18 | 36.70 |
| hellaswag-fr | 41.51 | 48.25 |
### English Benchmark Scores
| Benchmark | LFM2-700M | Luth-LFM2-700M |
|---|---|---|
| ifeval-en | 65.80 | 62.48 |
| gpqa-en | 26.98 | 23.17 |
| mmlu-en | 50.74 | 50.45 |
| math-500-en | 34.00 | 40.40 |
| arc-chall-en | 38.57 | 39.25 |
| hellaswag-en | 52.63 | 54.07 |
## Code Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("kurakurai/Luth-LFM2-700M")
model = AutoModelForCausalLM.from_pretrained("kurakurai/Luth-LFM2-700M")

messages = [
    {"role": "user", "content": "Quelle est la capitale de la France?"},
]

# Apply the chat template and tokenize the conversation
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=100)

# Decode only the newly generated tokens, skipping the prompt
print(
    tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
)
```
## Citation
```bibtex
@misc{luthlfm2kurakurai,
  title        = {Luth-LFM2-700M},
  author       = {Kurakura AI Team},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/kurakurai/Luth-LFM2-700M}},
  note         = {LFM2-700M fine-tuned on French datasets}
}
```