Qwen2.5-1.5B-GRPO-MATH-1EPOCH

Description:

A version of Qwen2.5-1.5B fine-tuned for one epoch with GRPO (Group Relative Policy Optimization) on the MATH dataset.
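
The snippet below is a minimal inference sketch using the Hugging Face transformers library. The repository id back-prop/Qwen2.5-GRPO-1.5B is taken from the model tree section of this card; the prompt text, generation settings, and BF16/device placement are illustrative assumptions, not a documented recommended setup.

```python
# Minimal inference sketch (assumes torch, transformers, and accelerate are installed,
# and that the checkpoint is hosted as back-prop/Qwen2.5-GRPO-1.5B).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "back-prop/Qwen2.5-GRPO-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the checkpoint is stored in BF16
    device_map="auto",
)

# Example math prompt; the base model is not instruction-tuned, so a plain
# problem/solution completion format is used here.
prompt = "Problem: What is the sum of the first 10 positive integers?\nSolution:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```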


Citation

@article{shao2024deepseekmath,
  title     = {DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models},
  author    = {Shao, Zhihong and Wang, Peiyi and Zhu, Qihao and Xu, Runxin and Song, Junxiao and Bi, Xiao and … Guo, Daya},
  journal   = {arXiv preprint arXiv:2402.03300},
  year      = {2024},
}
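
GRPO is the reinforcement-learning algorithm introduced in the DeepSeekMath paper cited above. The exact training recipe for this checkpoint is not documented here; the sketch below only illustrates what one epoch of GRPO fine-tuning on a MATH-style dataset could look like with TRL's GRPOTrainer. The dataset id, reward function, and hyperparameters are placeholder assumptions.

```python
# Hedged sketch of GRPO fine-tuning with TRL's GRPOTrainer. The dataset id,
# reward function, and hyperparameters are illustrative assumptions, not the
# recipe actually used for this checkpoint.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Any MATH-style dataset with problem/solution fields; this repo id is a placeholder.
raw = load_dataset("hendrycks/competition_math", split="train")
dataset = raw.map(lambda ex: {"prompt": ex["problem"] + "\nSolution:"})

def correctness_reward(completions, solution, **kwargs):
    # Toy reward: 1.0 if the last token of the reference solution appears in
    # the completion, else 0.0. A real setup would parse \boxed{} answers.
    return [1.0 if sol.split()[-1] in comp else 0.0
            for comp, sol in zip(completions, solution)]

config = GRPOConfig(
    output_dir="Qwen2.5-1.5B-GRPO-MATH-1EPOCH",
    num_train_epochs=1,            # "1EPOCH" in the model name
    per_device_train_batch_size=8,
    num_generations=8,             # group size for the relative advantage
    max_completion_length=512,
    bf16=True,                     # matches the BF16 tensor type listed below
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-1.5B",     # base model from the model tree
    reward_funcs=correctness_reward,
    args=config,
    train_dataset=dataset,
)
trainer.train()
```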
Model details

Format: Safetensors
Model size: 1.78B params
Tensor type: BF16

Model tree for back-prop/Qwen2.5-GRPO-1.5B

Base model: Qwen/Qwen2.5-1.5B
This model is a finetune of the base model.