Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning
Abstract
Reasoning abilities, especially those for solving complex math problems, are crucial components of general intelligence. Recent advances by proprietary companies, such as OpenAI's o-series models, have made remarkable progress on reasoning tasks. However, the complete technical details remain undisclosed; the only techniques widely believed to be involved are reinforcement learning (RL) and long chains of thought. This paper proposes a new RL framework, termed OREAL, to pursue the performance limit that can be achieved through Outcome REwArd-based reinforcement Learning for mathematical reasoning tasks, where only binary outcome rewards are easily accessible. We theoretically prove that behavior cloning on positive trajectories from best-of-N (BoN) sampling is sufficient to learn the KL-regularized optimal policy in binary feedback environments. This formulation further implies that the rewards of negative samples should be reshaped to ensure gradient consistency between positive and negative samples. To alleviate the long-standing difficulty of sparse rewards in RL, which is exacerbated by the partial correctness of long chains of thought in reasoning tasks, we further apply a token-level reward model to sample important tokens in reasoning trajectories for learning. With OREAL, a 7B model can for the first time obtain 94.0 pass@1 accuracy on MATH-500 through RL, on par with 32B models. OREAL-32B also surpasses previous 32B models trained by distillation, with 95.0 pass@1 accuracy on MATH-500. Our investigation also indicates the importance of the initial policy model and training queries for RL. Code, models, and data will be released to benefit future research: https://github.com/InternLM/OREAL.
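The abstract describes the training signal only in words. As a rough illustration, the snippet below combines a behavior-cloning term on verified-correct (best-of-N) trajectories with a reshaped penalty on incorrect ones, optionally weighted per token. It is a minimal sketch under our own assumptions: the tensor shapes, the `w_pos`/`w_neg` token-weight interface, and the 1/beta reshaping factor are illustrative and not taken from the paper.

```python
# Minimal sketch of an OREAL-style objective (illustrative, not the paper's implementation).
import torch

def oreal_loss(logp_pos, logp_neg, w_pos=None, w_neg=None, beta=1.0):
    """
    logp_pos: (B_pos, T) token log-probs of correct (positive) trajectories
              under the current policy; padded positions masked to 0.
    logp_neg: (B_neg, T) token log-probs of incorrect (negative) trajectories.
    w_pos, w_neg: optional (B, T) token weights, e.g. from a token-level reward
                  model that highlights important tokens (hypothetical interface).
    beta: KL-regularization strength of the underlying RL objective.
    """
    if w_pos is None:
        w_pos = torch.ones_like(logp_pos)
    if w_neg is None:
        w_neg = torch.ones_like(logp_neg)

    # Positive branch: behavior cloning on best-of-N correct samples, which the
    # paper argues is sufficient to learn the KL-regularized optimal policy
    # under binary outcome rewards.
    loss_pos = -(w_pos * logp_pos).sum() / w_pos.sum().clamp(min=1.0)

    # Negative branch: push probability mass away from incorrect trajectories.
    # The negative reward is reshaped (here by a simple 1/beta scale, an
    # assumption) so that its gradient scale stays consistent with the
    # positive behavior-cloning gradient.
    loss_neg = (w_neg * logp_neg).sum() / w_neg.sum().clamp(min=1.0) / beta

    return loss_pos + loss_neg
```

In use, `logp_pos` and `logp_neg` would be the policy's token log-probabilities on sampled solutions whose final answers were verified as correct or incorrect, and the returned scalar would be backpropagated as an ordinary loss.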
Community
We propose OREAL, a new RL algorithm to pursue the limit of outcome reward-based RL for math reasoning, and release several models with impressive results!
- OREAL-DSR1-Distill-Qwen-7B: for the first time, a 7B model obtains 94.0 pass@1 accuracy on MATH-500, on par with 32B models
- OREAL-7B: also obtains 91.0 pass@1 accuracy on MATH-500
- OREAL-32B: surpasses all previous 32B models with 95.0 pass@1 accuracy on MATH-500
To benefit future study, the corresponding SFT models and RL training queries will also be released soon!
Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- Entropy-Regularized Process Reward Model (2024)
- Demystifying Long Chain-of-Thought Reasoning in LLMs (2025)
- Improving Multi-Step Reasoning Abilities of Large Language Models with Direct Advantage Policy Optimization (2024)
- Offline Reinforcement Learning for LLM Multi-Step Reasoning (2024)
- VLM-RL: A Unified Vision Language Models and Reinforcement Learning Framework for Safe Autonomous Driving (2024)
- Enhancing Online Reinforcement Learning with Meta-Learned Objective from Offline Data (2025)
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling (2025)
Models citing this paper: 5
Datasets citing this paper: 1
Spaces citing this paper: 0