---
license: apache-2.0
---

<p align="center">
    <img src="https://cdn-uploads.huggingface.co/production/uploads/623d8ca4c29adf5ef6175615/q3Anm7o-MoNYjB8JztGVT.png" width="60%" />
</p>

<font size=3><div align='center' >
[[📖 arXiv Paper](https://arxiv.org/abs/2502.10391)]
[[📊 R1-Reward Code](https://github.com/yfzhang114/r1_reward)]
[[📝 R1-Reward Data](https://huggingface.co/datasets/yifanzhang114/R1-Reward-RL)]
</div></font>

# Training Multimodal Reward Model Through Stable Reinforcement Learning

🔥 We are proud to open-source **R1-Reward**, a comprehensive project for improving reward modeling through reinforcement learning. This release includes:

* **R1-Reward Model:** A state-of-the-art (SOTA) multimodal reward model demonstrating substantial gains (Voting@15):
    * **13.5%** improvement on VL Reward-Bench.
    * **3.5%** improvement on MM-RLHF Reward-Bench.
    * **14.6%** improvement on Multimodal Reward Bench.
* **StableReinforce Algorithm:** A novel reinforcement learning method that enhances the Reinforce++ approach by improving training loss stability, advantage estimation, and reward function design.
* **Open-Source Resources:** We provide the R1-Reward model, the R1-Reward RL training dataset, and inference code for IXC-Reward, MM-RLHF Reward, and R1-Reward on the three benchmarks in Figure 1. A minimal loading sketch is given below.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623d8ca4c29adf5ef6175615/yW7YWlxhsbLOaX927uG99.png)
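## Quick Start

Since this checkpoint uses the Qwen2.5-VL architecture, it should load through the standard `transformers` Qwen2.5-VL interface. The snippet below is a minimal, illustrative sketch: the repo id `yifanzhang114/R1-Reward` and the example judging prompt are assumptions here, and the exact evaluation prompt format used during training is defined in the [R1-Reward code](https://github.com/yfzhang114/r1_reward).

```python
# Minimal inference sketch (untested). Loads R1-Reward as a standard
# Qwen2.5-VL chat model and asks it to judge two candidate answers.
# The judging prompt below is illustrative only; see the R1-Reward repo
# for the exact template used during RL training.
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model_id = "yifanzhang114/R1-Reward"  # assumed repo id for this checkpoint

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "path/to/image.jpg"},  # replace with your image
        {"type": "text", "text": (
            "Question: What is shown in the image?\n"
            "Answer 1: ...\n"
            "Answer 2: ...\n"
            "Which answer is better? Reason step by step, then state your choice."
        )},
    ],
}]

# Standard Qwen2.5-VL preprocessing: render the chat template, then
# collect the vision inputs referenced in the messages.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt"
).to(model.device)

# Generate the model's reasoning and judgment, then strip the prompt tokens.
output_ids = model.generate(**inputs, max_new_tokens=512)
response = processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(response)
```

Because R1-Reward is trained to reason before judging, the decoded response contains a chain of thought followed by the final preference, rather than a scalar score.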
## Citation

If you find this work useful for your research and applications, please cite the related paper using this BibTeX:

```bibtex
@article{zhang2025mm,
  title={MM-RLHF: The Next Step Forward in Multimodal LLM Alignment},
  author={Zhang, Yi-Fan and Yu, Tao and Tian, Haochen and Fu, Chaoyou and Li, Peiyan and Zeng, Jianshu and Xie, Wulin and Shi, Yang and Zhang, Huanyu and Wu, Junkang and others},
  journal={arXiv preprint arXiv:2502.10391},
  year={2025}
}
```

## Related Projects
- [MM-RLHF: The Next Step Forward in Multimodal LLM Alignment](https://mm-rlhf.github.io/)
- [MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans?](https://github.com/yfzhang114/MME-RealWorld)
- [MME-Survey: A Comprehensive Survey on Evaluation of Multimodal LLMs](https://arxiv.org/abs/2411.15296)
- [Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models](https://github.com/yfzhang114/SliME)
- [VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction](https://github.com/VITA-MLLM/VITA)