Abstract
Reinforcement Pre-Training (RPT) reframes next-token prediction as a reasoning task trained with reinforcement learning, improving next-token prediction accuracy and offering a scalable way to leverage text data for general-purpose RL.
In this work, we introduce Reinforcement Pre-Training (RPT) as a new scaling paradigm for large language models and reinforcement learning (RL). Specifically, we reframe next-token prediction as a reasoning task trained with RL, where the model receives a verifiable reward for correctly predicting the next token of a given context. RPT offers a scalable method to leverage vast amounts of text data for general-purpose RL, rather than relying on domain-specific annotated answers. By incentivizing the capability of next-token reasoning, RPT significantly improves next-token prediction accuracy. Moreover, RPT provides a strong pre-trained foundation for further reinforcement fine-tuning. The scaling curves show that increased training compute consistently improves next-token prediction accuracy. These results position RPT as an effective and promising scaling paradigm for advancing language model pre-training.
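To make the reward formulation concrete, here is a minimal sketch of a verifiable next-token reward. It is an illustration under stated assumptions rather than the paper's exact implementation: the `</think>` delimiter, the `extract_prediction` helper, and exact-match scoring are choices made here for clarity.

```python
# Minimal sketch of a verifiable next-token reward for RPT-style training.
# Assumptions (not from the paper): the model emits its reasoning, then a
# final prediction after a "</think>" delimiter, and the reward is an exact
# match against the ground-truth next token.

def extract_prediction(completion: str) -> str:
    """Take the text after the last </think> tag as the model's prediction."""
    return completion.rsplit("</think>", 1)[-1].strip()

def next_token_reward(completion: str, ground_truth_token: str) -> float:
    """Return 1.0 if the predicted token matches the ground-truth next token."""
    return 1.0 if extract_prediction(completion) == ground_truth_token else 0.0

# Usage: for a context drawn from the pre-training corpus, sample several
# reasoning rollouts, score each one, and update the policy with a standard
# RL objective (e.g., a PPO/GRPO-style update).
rollouts = [
    "France's capital city... </think> Paris",
    "Probably a large city. </think> Lyon",
]
print([next_token_reward(r, "Paris") for r in rollouts])  # [1.0, 0.0]
```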
Community
Thanks for the question. According to the 'Before RL' column of Table 2, RPT already achieves stronger performance on math problems before reinforcement fine-tuning.
We’ve also achieved positive results on the math datasets you mentioned. We're continuing to scale up and organize our work, and we'll soon release evaluation results from larger-scale experiments, including the math datasets you're interested in.
I never thought RL could be used for pre-training
Excellent paper, but I wonder about the cost of training. The causal mask in the original GPT makes pre-training efficient, since every position is trained in a single forward pass. In this work it seems hard to bring the causal mask into RPT, so won't this increase the cost of RPT?
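To make the efficiency concern concrete, here is a rough back-of-the-envelope sketch, not taken from the paper: standard pre-training gets a next-token loss at every position from one causal-masked forward pass, whereas an RPT-style setup generates reasoning rollouts per predicted position. The sequence length, rollout length, and rollout count below are made-up illustrative numbers.

```python
# Rough, hypothetical cost comparison (illustration only, not from the paper).

def standard_pretraining_positions(seq_len: int) -> int:
    # With a causal mask, one forward pass yields a training signal
    # (next-token loss) at every one of the seq_len positions.
    return seq_len

def rpt_generated_tokens(seq_len: int, rollout_len: int, num_rollouts: int) -> int:
    # In an RPT-style setup, each predicted position is its own RL episode:
    # several reasoning rollouts are sampled and scored before the update.
    return seq_len * rollout_len * num_rollouts

print(standard_pretraining_positions(1024))                          # 1024
print(rpt_generated_tokens(1024, rollout_len=256, num_rollouts=8))   # 2097152
```

Under these assumed numbers, RPT generates far more tokens per training position than teacher-forced pre-training, which is the trade-off the question points at.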
What would happen if you applied RPT recursively - having the model reason about each token within its own reasoning chain? Would meta-reasoning about the reasoning process itself lead to even better performance, or would the computational overhead outweigh the benefits? :)
I see the paper says RPT is initialized from a reasoning model and mentions investigating RPT from a standard base LLM under Future Work. I wonder how, or whether, the training dynamics and thought process would differ when initializing from a base LLM instead of a reasoning model.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models (2025)
- Behavior Injection: Preparing Language Models for Reinforcement Learning (2025)
- KTAE: A Model-Free Algorithm to Key-Tokens Advantage Estimation in Mathematical Reasoning (2025)
- KDRL: Post-Training Reasoning LLMs via Unified Knowledge Distillation and Reinforcement Learning (2025)
- Do Not Let Low-Probability Tokens Over-Dominate in RL for LLMs (2025)
- Beyond Accuracy: Dissecting Mathematical Reasoning for LLMs Under Reinforcement Learning (2025)
- Incentivizing Strong Reasoning from Weak Supervision (2025)
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
If I'm not mistaken, your approach doesn't allow the massively parallel scaling of standard pre-training, so you shouldn't be constrained to just next-token prediction.
Have you considered other RL objectives inspired by pre-training besides next-token prediction, such as masked token prediction or next-sentence prediction from BERT?
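As a purely speculative illustration of this suggestion (not something the paper does), a masked-token variant of the verifiable reward could look like the sketch below; the `[MASK]` convention and the `</think>` delimiter are assumptions carried over from the earlier sketch.

```python
# Speculative sketch of a masked-token RL objective (not from the paper):
# the model reasons about a masked span and is rewarded for recovering it.

def masked_token_reward(completion: str, masked_token: str) -> float:
    """Reward 1.0 if the final answer after </think> recovers the masked token."""
    prediction = completion.rsplit("</think>", 1)[-1].strip()
    return 1.0 if prediction == masked_token else 0.0

context = "The Eiffel Tower is located in [MASK], France."
rollout = "The landmark is in the French capital. </think> Paris"
print(masked_token_reward(rollout, "Paris"))  # 1.0
```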