A Practical Two-Stage Recipe for Mathematical LLMs: Maximizing Accuracy with SFT and Efficiency with Reinforcement Learning
Abstract
A combination of extended supervised fine-tuning and reinforcement learning from online inference enhances the mathematical reasoning capabilities of large language models, achieving top-tier performance on benchmarks like the AI Mathematical Olympiad.
Enhancing the mathematical reasoning of Large Language Models (LLMs) is a pivotal challenge in advancing AI capabilities. While Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) are the dominant training paradigms, a systematic methodology for combining them to maximize both accuracy and efficiency remains largely unexplored. This paper introduces a practical and effective training recipe that strategically integrates extended SFT with RL from online inference (GRPO). We posit that these methods play complementary, not competing, roles: a prolonged SFT phase first pushes the model's accuracy to its limits, after which a GRPO phase dramatically improves token efficiency while preserving this peak performance. Our experiments reveal that extending SFT for as many as 10 epochs is crucial for performance breakthroughs, and that the primary role of GRPO in this framework is to optimize solution length. The efficacy of our recipe is rigorously validated through top-tier performance on challenging benchmarks, including a high rank among over 2,200 teams in the strictly leak-free AI Mathematical Olympiad (AIMO). This work provides the community with a battle-tested blueprint for developing state-of-the-art mathematical reasoners that are both exceptionally accurate and practically efficient. To ensure full reproducibility and empower future research, we will open-source our entire framework, including all code, model checkpoints, and training configurations at https://github.com/analokmaus/kaggle-aimo2-fast-math-r1.
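To give a concrete picture of the recipe described above, here is a minimal sketch of the two-stage pipeline written against Hugging Face TRL. TRL itself, the base model, the data files, the toy reward rule, and every hyperparameter shown are illustrative assumptions, not the authors' exact configuration; the open-sourced repository contains the actual training code.

```python
# Hedged sketch of the two-stage recipe (extended SFT, then GRPO) using
# Hugging Face TRL. Model name, data files, reward rule, and all
# hyperparameters are illustrative assumptions, not the authors' setup.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer, GRPOConfig, GRPOTrainer

BASE_MODEL = "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B"  # hypothetical base model

# ---- Stage 1: extended SFT (the paper reports gains from training up to ~10 epochs) ----
# Assumed format: a "text" column holding full worked solutions.
sft_data = load_dataset("json", data_files="sft_math_traces.jsonl", split="train")
sft_args = SFTConfig(output_dir="sft-stage", num_train_epochs=10, learning_rate=1e-5)
sft_trainer = SFTTrainer(model=BASE_MODEL, args=sft_args, train_dataset=sft_data)
sft_trainer.train()
sft_trainer.save_model("sft-stage/final")

# ---- Stage 2: GRPO on the SFT checkpoint, rewarding correct *and* short answers ----
def reward_correct_and_short(completions, answer, **kwargs):
    """Toy reward: +1 if the gold answer appears in the completion, minus a
    small length penalty. Assumes plain-text prompts, so `completions` is a
    list of strings and `answer` is a column of the RL dataset."""
    rewards = []
    for completion, gold in zip(completions, answer):
        correct = 1.0 if str(gold) in completion else 0.0
        rewards.append(correct - 0.001 * len(completion.split()))
    return rewards

# Assumed format: "prompt" (problem statement) and "answer" (gold answer) columns.
rl_data = load_dataset("json", data_files="rl_math_prompts.jsonl", split="train")
grpo_args = GRPOConfig(output_dir="grpo-stage", num_generations=8,
                       max_completion_length=4096)
GRPOTrainer(model="sft-stage/final", reward_funcs=reward_correct_and_short,
            args=grpo_args, train_dataset=rl_data).train()
```

The ordering is the point of the recipe: GRPO is applied only after SFT has pushed accuracy to its limit, so its job is to shorten solutions while preserving that peak performance rather than to discover new accuracy.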
Community
Paper from Our Kaggle AIMO 2 Work
We are happy to share a new paper based on our experience in the AI Mathematical Olympiad 2 (AIMO 2) competition hosted on Kaggle.
What we learned
- Longer SFT helps: Training on the same data for up to 10 epochs of supervised fine-tuning (SFT) raised mathematical accuracy.
- GRPO adds efficiency: After SFT, GRPO (online RL) kept accuracy while making solutions shorter (a sketch of how to check this follows the list).
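The sketch below (our assumed setup, not the authors' evaluation harness) compares the SFT and GRPO checkpoints on the same problems, reporting accuracy and average completion length; model paths, data fields, and the answer-matching rule are assumptions for illustration.

```python
# Hedged sketch: compare an SFT checkpoint with the GRPO checkpoint on the
# same problems. Paths, data format, and answer matching are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def evaluate(model_path, problems, answers, max_new_tokens=4096):
    tok = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(
        model_path, torch_dtype=torch.bfloat16, device_map="auto")
    correct, generated_tokens = 0, 0
    for problem, gold in zip(problems, answers):
        inputs = tok(problem, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
        completion = out[0][inputs["input_ids"].shape[1]:]
        generated_tokens += completion.shape[0]
        correct += int(str(gold) in tok.decode(completion, skip_special_tokens=True))
    return correct / len(problems), generated_tokens / len(problems)

# acc_sft, len_sft = evaluate("sft-stage/final", problems, answers)
# acc_rl,  len_rl  = evaluate("grpo-stage",      problems, answers)
# Expected pattern from the paper: acc_rl is close to acc_sft while len_rl < len_sft.
```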
Why it matters
AIMO 2 is a strictly leak-free benchmark, so improving accuracy is hard. More than 2,200 teams joined, and many struggled. Because our method worked under these tough conditions, we believe it can be useful in practice.
We hope our open resources will help others build better mathematical reasoners. Feedback is very welcome.
Code & checkpoints: https://github.com/analokmaus/kaggle-aimo2fast-math-r1