Malaysian Qwen 2.5 3B Instruct

Continued finetuning of https://huggingface.co/Qwen/Qwen2.5-3B-Instruct on a highly curated 1.5B-token Malaysian instruction dataset.

Improvements

  1. Responds in Mandarin, Tamil, Jawi, Manglish, and the local dialects of Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
  2. Able to code in Mandarin, Tamil, Jawi, Manglish, and the same local dialects.
  3. Handles multi-turn conversations in Malaysian contexts, such as Malaysian legislation, politics, religion and languages.
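A minimal inference sketch with Hugging Face Transformers (the model id is this card's repository name; dtype and device settings are assumptions you should adapt to your hardware):

```python
MODEL_ID = "mesolitica/Malaysian-Qwen2.5-3B-Instruct"

def generate_reply(prompt: str, max_new_tokens: int = 256) -> str:
    # Imports kept inside the function so the sketch can be read or imported
    # without the heavy dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="bfloat16", device_map="auto"
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][inputs.shape[1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate_reply("Terangkan apa itu nasi kerabu dalam loghat Kelantan"))
```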

Training session

Finetuned on mesolitica/Malaysian-SFT to make the model understand Malaysian context.
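The SFT mixture can be inspected with the Hugging Face `datasets` library (the repository name comes from this card; the split name is an assumption):

```python
def load_malaysian_sft(split: str = "train"):
    # Hypothetical loader: pulls the instruction mixture from the Hub.
    # Wrapped in a function so nothing is downloaded at import time.
    from datasets import load_dataset
    return load_dataset("mesolitica/Malaysian-SFT", split=split)
```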

How we train

  1. LoRA on ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"].
  2. Rank 128 with alpha 256 (an alpha-to-rank ratio of 2.0).
  3. Multipacking at 8192 context length with proper SDPA causal masking to prevent cross-document contamination and to ensure correct position IDs.
  4. Chunked CCE (Cut Cross-Entropy) loss for LoRA.
  5. WandB logs at https://wandb.ai/huseinzol05/lora-embedding-128-qwen2.5-3b-malaysian-8k?nw=nwuserhuseinzol05

Source code at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5
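The multipacking with per-document position IDs from step 3 can be illustrated as follows: several documents are concatenated into one sequence, and position IDs restart at 0 at every document boundary so masking and rotary positions treat documents independently. This is a simplified sketch, not the actual training code:

```python
# Simplified multipacking sketch: concatenate documents up to a maximum
# context length, restarting position ids at each document boundary.
from typing import List, Tuple

def pack_documents(docs: List[List[int]], max_len: int = 8192) -> Tuple[List[int], List[int]]:
    input_ids, position_ids = [], []
    for doc in docs:
        if len(input_ids) + len(doc) > max_len:
            break  # a real packer would start a new bin; kept simple here
        input_ids.extend(doc)
        position_ids.extend(range(len(doc)))  # restart positions per document
    return input_ids, position_ids

ids, pos = pack_documents([[5, 6, 7], [8, 9]], max_len=8)
# ids == [5, 6, 7, 8, 9]; pos == [0, 1, 2, 0, 1]
```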

Benchmark

MalayMMLU

Probability of next tokens

Based on the official 0-shot MalayMMLU first-token accuracy:

                           Model   Accuracy   shot by_letter        category
0  Malaysian-Qwen2.5-3B-Instruct  65.124847  0shot      True            STEM
1  Malaysian-Qwen2.5-3B-Instruct  65.903308  0shot      True        Language
2  Malaysian-Qwen2.5-3B-Instruct  58.514021  0shot      True  Social science
3  Malaysian-Qwen2.5-3B-Instruct  59.678580  0shot      True          Others
4  Malaysian-Qwen2.5-3B-Instruct  63.526735  0shot      True      Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Malaysian-Qwen2.5-3B-Instruct
Metric : first
Shot : 0shot
average accuracy 62.21038285218684
accuracy for STEM 65.12484650020467
accuracy for Language 65.9033078880407
accuracy for Social science 58.51402139346632
accuracy for Others 59.67857999520268
accuracy for Humanities 63.52673492605233

For the original model:

                 Model   Accuracy   shot by_letter        category
0  Qwen2.5-3B-Instruct  55.218993  0shot      True            STEM
1  Qwen2.5-3B-Instruct  60.464377  0shot      True        Language
2  Qwen2.5-3B-Instruct  49.479618  0shot      True  Social science
3  Qwen2.5-3B-Instruct  50.755577  0shot      True          Others
4  Qwen2.5-3B-Instruct  57.542662  0shot      True      Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Qwen2.5-3B-Instruct
Metric : first
Shot : 0shot
average accuracy 54.59463924338166
accuracy for STEM 55.218993041342614
accuracy for Language 60.464376590330794
accuracy for Social science 49.479618386816995
accuracy for Others 50.75557687694891
accuracy for Humanities 57.54266211604096
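The reported average is the question-count-weighted mean of the per-category accuracies, using the category counts printed above. For the finetuned model:

```python
# Reproduce the reported average accuracy as the weighted mean of the
# per-category accuracies (numbers copied from the tables above).
counts = {'Social science': 6918, 'Language': 6288, 'Humanities': 4395,
          'Others': 4169, 'STEM': 2443}
accuracy = {'STEM': 65.12484650020467, 'Language': 65.9033078880407,
            'Social science': 58.51402139346632, 'Others': 59.67857999520268,
            'Humanities': 63.52673492605233}
average = sum(accuracy[c] * counts[c] for c in counts) / sum(counts.values())
print(average)  # ~62.2104, matching the reported 62.21038285218684
```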

First token match using vLLM

Based on 0-shot exact first-token match using vLLM Guided Decoding:

                           Model   Accuracy  shot        category
0  Malaysian-Qwen2.5-3B-Instruct  55.505526     0            STEM
1  Malaysian-Qwen2.5-3B-Instruct  59.446565     0        Language
2  Malaysian-Qwen2.5-3B-Instruct  53.093380     0  Social science
3  Malaysian-Qwen2.5-3B-Instruct  52.866395     0          Others
4  Malaysian-Qwen2.5-3B-Instruct  54.152446     0      Humanities
Model : Malaysian-Qwen2.5-3B-Instruct
Metric : full
Shot : 0
average accuracy 55.139800933382894
accuracy for STEM 55.50552599263201
accuracy for Language 59.44656488549618
accuracy for Social science 53.09337958947673
accuracy for Others 52.86639481890142
accuracy for Humanities 54.152445961319685

For the original model:

                 Model   Accuracy  shot        category
0  Qwen2.5-3B-Instruct  51.125665     0            STEM
1  Qwen2.5-3B-Instruct  57.649491     0        Language
2  Qwen2.5-3B-Instruct  44.998554     0  Social science
3  Qwen2.5-3B-Instruct  47.637323     0          Others
4  Qwen2.5-3B-Instruct  54.357224     0      Humanities
Model : Qwen2.5-3B-Instruct
Metric : full
Shot : 0
average accuracy 51.05521827117664
accuracy for STEM 51.12566516577978
accuracy for Language 57.649491094147585
accuracy for Social science 44.99855449551894
accuracy for Others 47.637323099064524
accuracy for Humanities 54.357224118316275
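Scoring for this metric is a per-category exact match between the model's first emitted answer token and the gold answer letter. A hypothetical scoring helper (not the official MalayMMLU evaluation code) might look like:

```python
from collections import defaultdict
from typing import Dict, List

def first_token_accuracy(predictions: List[str], golds: List[str],
                         categories: List[str]) -> Dict[str, float]:
    """Exact-match accuracy of the first answer token, per category.

    predictions/golds are answer letters (e.g. 'A'..'D'); categories gives
    the benchmark category of each question.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for pred, gold, cat in zip(predictions, golds, categories):
        total[cat] += 1
        correct[cat] += int(pred.strip() == gold)
    return {cat: 100 * correct[cat] / total[cat] for cat in total}

scores = first_token_accuracy(['A', 'B', 'C'], ['A', 'B', 'D'],
                              ['STEM', 'STEM', 'Language'])
# {'STEM': 100.0, 'Language': 0.0}
```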

Acknowledgement

Special thanks to https://www.sns.com.my for the 8x H100 node!
