---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- CodeGoat24/HPD
- CodeGoat24/LiFT-HRA
- CodeGoat24/OIP
- CodeGoat24/EvalMuse
- CodeGoat24/ShareGPTVideo-DPO
- CodeGoat24/LLaVA-Critic-113k
- CodeGoat24/VideoDPO
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
---
# UnifiedReward-qwen-7B: A Reward Model for Pref-GRPO
We are actively gathering feedback from the community to improve our models. We welcome your input and encourage you to stay updated through our repository!!
## Model Summary
UnifiedReward-qwen-7b is the first unified reward model based on Qwen/Qwen2.5-VL-7B-Instruct for multimodal understanding and generation assessment. It enables both pairwise ranking and pointwise scoring, and is notably employed for vision model preference alignment within the Pref-GRPO framework.
This model is a key component of the research presented in the paper Pref-GRPO: Pairwise Preference Reward-based GRPO for Stable Text-to-Image Reinforcement Learning.
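To give a rough sense of how a pairwise reward model can drive preference-based GRPO, the sketch below turns pairwise judgments over a group of rollouts into per-image win-rate rewards. Here `judge_pair` is a hypothetical helper standing in for the pairwise ranking call shown in the Quick Start below; the exact reward formulation used by Pref-GRPO is defined in the paper and code.

```python
from itertools import combinations

def win_rate_rewards(prompt, images, judge_pair):
    """Return a win-rate reward for each image in a rollout group.

    `judge_pair(prompt, img_a, img_b)` is a hypothetical wrapper around the
    pairwise ranking call shown in the Quick Start; it should return 0 if
    the first image is preferred and 1 otherwise.
    """
    wins = [0] * len(images)
    for i, j in combinations(range(len(images)), 2):
        winner = i if judge_pair(prompt, images[i], images[j]) == 0 else j
        wins[winner] += 1
    # Every image takes part in len(images) - 1 comparisons.
    return [w / (len(images) - 1) for w in wins]
```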
For further details, please refer to the following resources:
- Paper: [Pref-GRPO: Pairwise Preference Reward-based GRPO for Stable Text-to-Image Reinforcement Learning](https://arxiv.org/abs/2508.20751)
- Project Page: https://codegoat24.github.io/UnifiedReward/Pref-GRPO
- Code: https://github.com/CodeGoat24/Pref-GRPO
- Model Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-models-67c3008148c3a380d15ac63a
- Dataset Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-training-data-67c300d4fd5eff00fa7f1ede
- Point of Contact: Yibin Wang
## Compared with Current Reward Models
| Reward Model | Method | Image Generation | Image Understanding | Video Generation | Video Understanding |
|---|---|---|---|---|---|
| PickScore | Point | √ | | | |
| HPS | Point | √ | | | |
| ImageReward | Point | √ | | | |
| LLaVA-Critic | Pair/Point | | √ | | |
| IXC-2.5-Reward | Pair/Point | | √ | | √ |
| VideoScore | Point | | | √ | |
| LiFT | Point | | | √ | |
| VisionReward | Point | √ | | √ | |
| VideoReward | Point | | | √ | |
| UnifiedReward (Ours) | Pair/Point | √ | √ | √ | √ |
## Quick Start
All pairwise ranking and pointwise scoring inference code is provided in our GitHub repository.
We take image understanding assessment as an example here:
```python
import warnings

import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

warnings.filterwarnings("ignore")

model_path = "CodeGoat24/UnifiedReward-qwen-7b"

# Load the reward model and its processor.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_path)

# Example image with two candidate answers to compare.
url = "https://github.com/LLaVA-VL/blog/blob/main/2024-10-03-llava-critic/static/images/critic_img_seven.png?raw=True"
image = Image.open(requests.get(url, stream=True).raw)

prompt_text = (
    'Given an image and a corresponding question, please serve as an unbiased and fair judge to evaluate the quality of the answers provided by a Large Multimodal Model (LMM). Determine which answer is better and explain your reasoning with specific details. Your task is provided as follows:\n'
    'Question: [What this image presents?]\n'
    'The first response: [The image is a black and white sketch of a line that appears to be in the shape of a cross. The line is a simple and straightforward representation of the cross shape, with two straight lines intersecting at a point.]\n'
    'The second response: [This is a handwritten number seven.]\n'
    'ASSISTANT:\n'
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": prompt_text},
        ],
    }
]

# Build model inputs with the chat template and vision preprocessing.
chat_input = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[chat_input],
    images=image_inputs,
    videos=video_inputs,
    return_tensors="pt",
    padding=True,
).to("cuda")

# Generate the judgment and strip the prompt tokens from the output.
with torch.no_grad():
    generated_ids = model.generate(**inputs, max_new_tokens=4096)
generated_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output = processor.batch_decode(generated_trimmed, skip_special_tokens=True)[0]
print(output)
```
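The same loading and generation pipeline also works for pointwise scoring; only the prompt changes. The wording below is an illustrative sketch rather than the exact scoring prompt used during training; the official pointwise prompts are provided in the GitHub repository.

```python
# Illustrative pointwise-scoring prompt (hypothetical wording); reuse the
# apply_chat_template / processor / model.generate pipeline from above with
# this text in place of `prompt_text`.
score_prompt = (
    'You are given a generated image and its text caption. Please analyze the '
    'image, judge how well it aligns with the caption, and conclude with an '
    'overall score from 1 to 5.\n'
    'Caption: [a handwritten number seven]\n'
)
```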
## Citation
```bibtex
@article{Pref-GRPO&UniGenBench,
  title={Pref-GRPO: Pairwise Preference Reward-based GRPO for Stable Text-to-Image Reinforcement Learning},
  author={Wang, Yibin and Li, Zhimin and Zang, Yuhang and Zhou, Yujie and Bu, Jiazi and Wang, Chunyu and Lu, Qinglin and Jin, Cheng and Wang, Jiaqi},
  journal={arXiv preprint arXiv:2508.20751},
  year={2025}
}
```