No Prompt Left Behind: Exploiting Zero-Variance Prompts in LLM Reinforcement Learning via Entropy-Guided Advantage Shaping
Abstract
RL-ZVP, a novel reinforcement learning algorithm, leverages zero-variance prompts to improve the accuracy and pass rate of Large Language Models in math reasoning tasks.
Reinforcement Learning with Verifiable Rewards (RLVR) is a powerful framework for improving the reasoning abilities of Large Language Models (LLMs). However, current methods such as GRPO rely only on problems where the model's responses to the same input differ in correctness, while ignoring those where all responses receive the same reward, the so-called zero-variance prompts. In this work, we argue that such prompts are not useless but can, in fact, provide meaningful feedback for policy optimization. To this end, we introduce RL with Zero-Variance Prompts (RL-ZVP), a novel algorithm that extracts learning signals from zero-variance prompts. RL-ZVP directly rewards correctness and penalizes errors even without contrasting responses, modulating feedback with token-level characteristics to preserve informative, nuanced signals. Across six math reasoning benchmarks, RL-ZVP achieves significant improvements of up to 8.61 points in accuracy and 7.77 points in pass rate over GRPO, while consistently outperforming other baselines that filter out zero-variance prompts. These results highlight the untapped potential of learning from zero-variance prompts in RLVR.
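Since the abstract only sketches the idea, the snippet below is a minimal illustration of the contrast it describes: GRPO's group-normalized advantages vanish on zero-variance prompts, whereas an entropy-guided shaping can still emit a signed, token-level signal. The function names, the max-normalized entropy weighting, and the `scale` parameter are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def grpo_advantages(rewards):
    """Standard GRPO group-normalized advantages.

    For a zero-variance group (all rewards equal) every advantage is 0,
    so the prompt contributes no gradient signal.
    """
    rewards = np.asarray(rewards, dtype=float)
    std = rewards.std()
    if std < 1e-8:
        return np.zeros_like(rewards)
    return (rewards - rewards.mean()) / std

def rl_zvp_token_advantages(rewards, token_entropies, scale=1.0):
    """Hypothetical sketch of entropy-guided advantage shaping on zero-variance prompts.

    token_entropies[i] holds the per-token policy entropies of response i.
    If the group has reward variance, fall back to GRPO. Otherwise, directly
    reward correctness (all rewards 1) or penalize errors (all rewards 0),
    modulating each token's advantage by its normalized entropy (an assumed
    shaping choice for illustration only).
    """
    rewards = np.asarray(rewards, dtype=float)
    if rewards.std() >= 1e-8:  # contrastive signal exists: use GRPO advantages
        adv = grpo_advantages(rewards)
        return [np.full_like(np.asarray(h, dtype=float), a)
                for h, a in zip(token_entropies, adv)]

    sign = 1.0 if rewards[0] > 0.5 else -1.0  # all-correct vs. all-wrong group
    shaped = []
    for ent in token_entropies:
        ent = np.asarray(ent, dtype=float)
        norm = ent / (ent.max() + 1e-8)       # emphasize high-entropy (uncertain) tokens
        shaped.append(sign * scale * norm)
    return shaped

# Example: a zero-variance, all-correct group of two responses
advs = rl_zvp_token_advantages(
    rewards=[1.0, 1.0],
    token_entropies=[[0.2, 1.3, 0.7], [0.1, 0.9]],
)
print(advs)  # positive, entropy-weighted advantages instead of all zeros
```

Under GRPO, the same group would yield all-zero advantages and the prompt would be wasted; the sketch shows how a zero-variance group can still push the policy toward (or away from) its uniform outcome.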
Community
We propose exploiting zero-variance prompts to provide additional learning signals in the RLVR (GRPO) pipeline.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Beyond Pass@1: Self-Play with Variational Problem Synthesis Sustains RLVR (2025)
- COPO: Consistency-Aware Policy Optimization (2025)
- Beyond Correctness: Harmonizing Process and Outcome Rewards through RL Training (2025)
- Implicit Actor Critic Coupling via a Supervised Learning Framework for RLVR (2025)
- DCPO: Dynamic Clipping Policy Optimization (2025)
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources (2025)
- ConfClip: Confidence-Weighted and Clipped Reward for Reinforcement Learning in LLMs (2025)