Malaysian Qwen 2.5 0.5B Instruct

Continued finetuning of https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct on a highly curated 1.5B-token Malaysian instruction dataset.

Improvements

  1. Supports responding in Mandarin, Tamil, Jawi, Manglish, and the Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu dialects.
  2. Able to code in Mandarin, Tamil, Jawi, Manglish and the same local dialects.
  3. Multi-turn conversations on Malaysian context, such as Malaysian legislation, politics, religion and languages.

Training session

Finetuned on mesolitica/Malaysian-SFT to make the model understand Malaysian context.

How we train

  1. LoRA on ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"].
  2. Rank 128 with alpha 256, i.e. a LoRA scaling factor (alpha / rank) of 2.0.
  3. Multipacking with an 8192 context length, using proper SDPA causal masking to prevent cross-document contamination and resetting position ids per document.
  4. Chunked Cut Cross-Entropy (CCE) loss for LoRA.
  5. WandB at https://wandb.ai/huseinzol05/lora-embedding-128-qwen2.5-0.5b-malaysian-8k

Source code at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5
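
The multipacking step (point 3 in the list above) can be illustrated in plain Python. The helper names are hypothetical; a real implementation would build these as tensors and feed them to SDPA as the attention mask and `position_ids`:

```python
def packed_position_ids(doc_lengths):
    """Position ids restart at 0 for every document packed into one sequence."""
    pos = []
    for n in doc_lengths:
        pos.extend(range(n))
    return pos

def block_causal_mask(doc_lengths):
    """Block-diagonal causal mask: token i may attend to token j only when
    j <= i and both tokens belong to the same packed document."""
    total = sum(doc_lengths)
    mask = [[False] * total for _ in range(total)]
    start = 0
    for n in doc_lengths:
        for i in range(start, start + n):
            for j in range(start, i + 1):
                mask[i][j] = True
        start += n
    return mask

# Two documents of lengths 3 and 2 packed into one row of length 5.
positions = packed_position_ids([3, 2])  # [0, 1, 2, 0, 1]
mask = block_causal_mask([3, 2])
```

Without the block-diagonal mask and restarted position ids, tokens from one document would attend to (and be positioned relative to) tokens of an unrelated document packed into the same 8192-token row.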

Benchmark

MalayMMLU

Next-token probability

Based on the official 0-shot MalayMMLU first-token accuracy,

                             Model   Accuracy   shot by_letter        category
0  Malaysian-Qwen2.5-0.5B-Instruct  51.166598  0shot      True            STEM
1  Malaysian-Qwen2.5-0.5B-Instruct  50.890585  0shot      True        Language
2  Malaysian-Qwen2.5-0.5B-Instruct  48.944782  0shot      True  Social science
3  Malaysian-Qwen2.5-0.5B-Instruct  49.556249  0shot      True          Others
4  Malaysian-Qwen2.5-0.5B-Instruct  53.060296  0shot      True      Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Malaysian-Qwen2.5-0.5B-Instruct
Metric : first
Shot : 0shot
average accuracy 50.52657663238756
accuracy for STEM 51.1665984445354
accuracy for Language 50.89058524173028
accuracy for Social science 48.94478172882336
accuracy for Others 49.55624850083953
accuracy for Humanities 53.06029579067122

While the original model,

                   Model   Accuracy   shot by_letter        category
0  Qwen2.5-0.5B-Instruct  48.260336  0shot      True            STEM
1  Qwen2.5-0.5B-Instruct  45.117684  0shot      True        Language
2  Qwen2.5-0.5B-Instruct  45.692397  0shot      True  Social science
3  Qwen2.5-0.5B-Instruct  46.725834  0shot      True          Others
4  Qwen2.5-0.5B-Instruct  50.079636  0shot      True      Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Qwen2.5-0.5B-Instruct
Metric : first
Shot : 0shot
average accuracy 46.77652500722752
accuracy for STEM 48.26033565288579
accuracy for Language 45.1176844783715
accuracy for Social science 45.69239664642961
accuracy for Others 46.7258335332214
accuracy for Humanities 50.07963594994311
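
The reported average accuracy is the mean over the per-category accuracies weighted by the category sizes shown above. A quick check in plain Python using the Malaysian model's numbers:

```python
# Per-category first-token accuracies and question counts from the tables above.
accuracy = {
    "STEM": 51.166598, "Language": 50.890585, "Social science": 48.944782,
    "Others": 49.556249, "Humanities": 53.060296,
}
counts = {
    "STEM": 2443, "Language": 6288, "Social science": 6918,
    "Others": 4169, "Humanities": 4395,
}

total = sum(counts.values())
average = sum(accuracy[c] * counts[c] for c in counts) / total
print(round(average, 4))  # → 50.5266
```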

First token match using vLLM

Based on the 0-shot exact first-token match using vLLM guided decoding,

                             Model   Accuracy  shot        category
0  Malaysian-Qwen2.5-0.5B-Instruct  47.032337     0            STEM
1  Malaysian-Qwen2.5-0.5B-Instruct  46.755725     0        Language
2  Malaysian-Qwen2.5-0.5B-Instruct  46.371784     0  Social science
3  Malaysian-Qwen2.5-0.5B-Instruct  47.325498     0          Others
4  Malaysian-Qwen2.5-0.5B-Instruct  50.420933     0      Humanities
Model : Malaysian-Qwen2.5-0.5B-Instruct
Metric : full
Shot : 0
average accuracy 47.43732705571387
accuracy for STEM 47.032337290216944
accuracy for Language 46.75572519083969
accuracy for Social science 46.37178375252963
accuracy for Others 47.325497721276086
accuracy for Humanities 50.42093287827076

While the original model,

                   Model   Accuracy  shot        category
0  Qwen2.5-0.5B-Instruct  44.412607     0            STEM
1  Qwen2.5-0.5B-Instruct  41.539440     0        Language
2  Qwen2.5-0.5B-Instruct  42.873663     0  Social science
3  Qwen2.5-0.5B-Instruct  43.391701     0          Others
4  Qwen2.5-0.5B-Instruct  46.871445     0      Humanities
Model : Qwen2.5-0.5B-Instruct
Metric : full
Shot : 0
average accuracy 43.49729484161401
accuracy for STEM 44.412607449856736
accuracy for Language 41.539440203562336
accuracy for Social science 42.873662908355016
accuracy for Others 43.39170064763732
accuracy for Humanities 46.87144482366325
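
Guided decoding constrains generation so the first emitted token must be one of the answer letters; accuracy is then a plain exact match against the gold letter. A minimal sketch of the metric, with hypothetical predictions (not from the actual run):

```python
def first_token_accuracy(predictions, golds):
    """Percentage of questions whose constrained first token equals the gold answer letter."""
    assert len(predictions) == len(golds)
    hits = sum(p == g for p, g in zip(predictions, golds))
    return 100.0 * hits / len(golds)

# Hypothetical example: 3 correct out of 4.
print(first_token_accuracy(["A", "C", "B", "D"], ["A", "C", "B", "B"]))  # → 75.0
```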

Acknowledgement

Special thanks to https://www.sns.com.my for the 8x H100 node!
