arxiv:2507.03112

RLVER: Reinforcement Learning with Verifiable Emotion Rewards for Empathetic Agents

Published on Jul 3
· Submitted by judge on Jul 9
Abstract

An end-to-end reinforcement learning framework using simulated user emotion rewards enhances emotional intelligence in large language models while maintaining cognitive skills.

AI-generated summary

Large language models (LLMs) excel at logical and algorithmic reasoning, yet their emotional intelligence (EQ) still lags far behind their cognitive prowess. While reinforcement learning from verifiable rewards (RLVR) has advanced in other domains, its application to dialogue, especially for emotional intelligence, remains underexplored. In this work, we introduce RLVER, the first end-to-end reinforcement learning framework that leverages verifiable emotion rewards from simulated users to cultivate higher-order empathetic abilities in LLMs. Within this framework, self-consistent affective simulated users engage in dialogue rollouts and produce deterministic emotion scores during conversations, serving as reward signals to guide the LLM's learning. Fine-tuning the publicly available Qwen2.5-7B-Instruct model with PPO boosts its Sentient-Benchmark score from 13.3 to 79.2 while largely preserving mathematical and coding competence. Extensive experiments reveal that: (i) RLVER consistently improves multiple dialogue capabilities; (ii) thinking and non-thinking models show distinct trends: thinking models excel in empathy and insight, while non-thinking models favor action; (iii) GRPO often yields stable gains, while PPO can push certain capabilities to a higher ceiling; (iv) more challenging environments are not always better, and moderate ones can yield stronger outcomes. Our results show that RLVER is a practical route toward emotionally intelligent and broadly capable language agents.
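To make the core idea concrete, here is a minimal toy sketch of a "verifiable emotion reward": a simulated user that deterministically scores an agent reply, producing the kind of scalar reward a PPO or GRPO trainer could consume. This is an illustration only, not the paper's implementation; the cue-word heuristic and all names (`emotion_reward`, `rollout_return`, `EMPATHY_CUES`) are hypothetical, and the paper's simulated users are themselves LLM-based rather than keyword-based.

```python
import re

# Hypothetical cue set for the toy scorer; the paper's simulated user
# is an LLM persona, not a keyword matcher.
EMPATHY_CUES = {"understand", "sorry", "feel", "hear", "support"}

def emotion_reward(agent_reply: str) -> float:
    """Deterministic emotion score in [0, 1]: fraction of cue words present.

    Determinism is the key property: the same reply always gets the same
    score, making the reward verifiable and stable for RL training.
    """
    words = set(re.findall(r"[a-z']+", agent_reply.lower()))
    return len(words & EMPATHY_CUES) / len(EMPATHY_CUES)

def rollout_return(dialogue: list[str]) -> float:
    """Sum per-turn emotion rewards over a dialogue rollout.

    Assumes user turns at even indices and agent turns at odd indices;
    a PPO trainer would use this scalar as the rollout's reward.
    """
    return sum(emotion_reward(turn) for turn in dialogue[1::2])

dialogue = [
    "I failed my exam today.",
    "I'm sorry to hear that. I understand how you feel.",
]
print(rollout_return(dialogue))
```

In the actual framework, the simulated user updates its emotional state turn by turn and the resulting scores guide the policy update; this sketch only shows the shape of the reward interface.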

Community

Paper submitter

This paper introduces the first RLVR framework to boost LLM empathy, using a simulated user that turns emotional reactions into verifiable reward signals. The code, checkpoints, and scripts are open-sourced to accelerate research into emotionally intelligent AI.

Paper author

Check out the code 👇:
https://github.com/Tencent/DigitalHuman/tree/main/RLVER

Join the conversation on Twitter 💬:
https://x.com/tuzhaopeng/status/1940963412848398449

