
Training Multimodal Reward Model Through Stable Reinforcement Learning

🔥 We are proud to open-source R1-Reward, a comprehensive project for improving reward modeling through reinforcement learning. This release includes:

  • R1-Reward Model: A state-of-the-art (SOTA) multimodal reward model demonstrating substantial gains (Voting@15):
    • 13.5% improvement on VL Reward-Bench.
    • 3.5% improvement on MM-RLHF Reward-Bench.
    • 14.6% improvement on Multimodal Reward Bench.
  • StableReinforce Algorithm: A novel reinforcement learning method that enhances the Reinforce++ approach by improving training loss stability, advantage estimation, and reward function design (see the illustrative sketch after this list).
  • Open-Source Resources: We provide the R1-Reward model, the R1-Reward RL training dataset, and inference code for IXC-Reward, MM-RLHF Reward, and R1-Reward on the three benchmarks in Figure 1 (a minimal loading example is shown below the figure).
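
The exact StableReinforce objective is defined in the paper; as a rough illustration of the kind of stabilizers described above, the sketch below clips the log importance ratio before exponentiation and filters outlier advantages before a Reinforce++/PPO-style surrogate loss. The function name, thresholds, and filtering rule are assumptions for illustration, not the released implementation.

```python
# Rough illustration only: the exact StableReinforce objective is given in the paper.
# The function name, clipping thresholds, and advantage filter below are assumptions
# chosen to show the general idea (stabilized ratios and advantages).
import torch

def stable_reinforce_style_loss(logp_new, logp_old, advantages,
                                log_ratio_clip=3.0, adv_sigma=3.0, ppo_clip=0.2):
    # Clip the log importance ratio before exponentiation so exp() cannot overflow
    # on strongly off-policy tokens (a common source of loss spikes).
    log_ratio = (logp_new - logp_old).clamp(-log_ratio_clip, log_ratio_clip)
    ratio = log_ratio.exp()

    # Normalize advantages and zero out extreme outliers beyond a few sigma,
    # which would otherwise dominate the gradient.
    z = (advantages - advantages.mean()) / advantages.std().clamp_min(1e-6)
    z = z * (z.abs() <= adv_sigma).float()

    # Standard clipped policy-gradient surrogate on the stabilized quantities.
    surrogate = torch.min(ratio * z, ratio.clamp(1 - ppo_clip, 1 + ppo_clip) * z)
    return -surrogate.mean()

# Toy check with random token-level log-probabilities and advantages.
logp_new, logp_old = torch.randn(4, 64), torch.randn(4, 64)
advantages = torch.randn(4, 64)
print(stable_reinforce_style_loss(logp_new, logp_old, advantages))
```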

Figure 1: Results of IXC-Reward, MM-RLHF Reward, and R1-Reward on the three reward benchmarks.
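
Below is a minimal, unofficial loading sketch. It assumes the checkpoint loads with the standard Qwen2.5-VL classes in a recent transformers release (the card tags the model as qwen2_5_vl, BF16, roughly 8.3B parameters); the image path, prompt, and generation settings are placeholders, so refer to the released inference code for the exact reward-scoring prompt and output parsing.

```python
# Unofficial loading sketch. Assumes a recent `transformers` with Qwen2.5-VL support;
# the image path and prompt below are placeholders, not the official reward prompt.
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "yifanzhang114/R1-Reward"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder image
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        # Placeholder prompt: the repository's inference code defines the real one.
        {"type": "text", "text": "Given the image and question, judge which of the two candidate answers is better ..."},
    ],
}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
reply = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(reply)  # the model's reasoning and final preference/score
```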

Citation

If you find R1-Reward useful for your research or applications, please cite our paper using this BibTeX:

@article{zhang2025r1,
  title={R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning},
  author={Zhang, Yi-Fan and Lu, Xingyu and Hu, Xiao and Fu, Chaoyou and Wen, Bin and Zhang, Tianke and Liu, Changyi and Jiang, Kaiyu and Chen, Kaibing and Tang, Kaiyu and others},
  journal={arXiv preprint arXiv:2505.02835},
  year={2025}
}

