Two Experts Are All You Need for Steering Thinking: Reinforcing Cognitive Effort in MoE Reasoning Models Without Additional Training Paper • 2505.14681 • Published May 20 • 9
RLVER: Reinforcement Learning with Verifiable Emotion Rewards for Empathetic Agents Paper • 2507.03112 • Published Jul 3 • 27
Sentient Agent as a Judge: Evaluating Higher-Order Social Cognition in Large Language Models Paper • 2505.02847 • Published May 1 • 28
SPC: Evolving Self-Play Critic via Adversarial Games for LLM Reasoning Paper • 2504.19162 • Published Apr 27 • 17
Self-Consistency of the Internal Reward Models Improves Self-Rewarding Language Models Paper • 2502.08922 • Published Feb 13
S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning Paper • 2502.12853 • Published Feb 18 • 29
Unveiling and Consulting Core Experts in Retrieval-Augmented MoE-based LLMs Paper • 2410.15438 • Published Oct 20, 2024