arxiv:2507.07999

Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology

Published on Jul 10 · Submitted by HaochenWang on Jul 11

AI-generated summary

TreeBench evaluates visual grounded reasoning through subtle target detection, traceable evidence, and second-order reasoning, while TreeVGR enhances this with joint localization and reasoning using reinforcement learning.

Abstract

Models like OpenAI-o3 pioneer visual grounded reasoning by dynamically referencing visual regions, much like humans "thinking with images". However, no benchmark exists to evaluate these capabilities holistically. To bridge this gap, we propose TreeBench (Traceable Evidence Evaluation Benchmark), a diagnostic benchmark built on three principles: (1) focused visual perception of subtle targets in complex scenes, (2) traceable evidence via bounding box evaluation, and (3) second-order reasoning that tests object interactions and spatial hierarchies beyond simple object localization. Prioritizing images with dense objects, we initially sample 1K high-quality images from SA-1B, and incorporate eight LMM experts to manually annotate questions, candidate options, and answers for each image. After three stages of quality control, TreeBench consists of 405 challenging visual question-answering pairs. Even the most advanced models struggle with this benchmark: none reaches 60% accuracy; e.g., OpenAI-o3 scores only 54.87. Furthermore, we introduce TreeVGR (Traceable Evidence Enhanced Visual Grounded Reasoning), a training paradigm that jointly supervises localization and reasoning with reinforcement learning, enabling accurate localization and explainable reasoning pathways. Initialized from Qwen2.5-VL-7B, it improves V* Bench (+16.8), MME-RealWorld (+12.6), and TreeBench (+13.4), proving that traceability is key to advancing vision-grounded reasoning. The code is available at https://github.com/Haochen-Wang409/TreeVGR.

Community

Paper author · Paper submitter

We propose TreeBench, the first benchmark specifically designed to evaluate "thinking with images" capabilities. Unlike previous benchmarks, which only evaluate final QA accuracy, TreeBench also evaluates localization precision. While models approach saturation (>90%) on benchmarks like V* Bench, the current state-of-the-art model, OpenAI-o3, scores only 54.87 on TreeBench, leaving large room for improvement for future work.
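For concreteness, here is a minimal sketch of how a TreeBench-style item could be scored on both axes, assuming a single annotated evidence box per question and a fixed IoU threshold for counting a localization as correct; the function names and the threshold are illustrative, and the benchmark's actual protocol may differ.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def score_item(pred_answer, gt_answer, pred_box, gt_box, iou_thresh=0.5):
    """Score one question on both axes: answer accuracy and localization precision.

    Hypothetical protocol: the localization counts only if the predicted
    evidence box overlaps the annotated box above the IoU threshold.
    """
    answer_correct = pred_answer == gt_answer
    localized = iou(pred_box, gt_box) >= iou_thresh
    return answer_correct, localized
```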

Moreover, we propose TreeVGR, the current state-of-the-art open-source visual grounded reasoning model. Unlike previous RL approaches, which supervise only the final answer, we explicitly supervise the generated bounding boxes with a novel dual IoU reward. This reward enforces accountability to human-annotated visual evidence, guiding the policy toward spatially accurate and logically coherent reasoning pathways.
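As a rough illustration of what such a reward could look like, the sketch below computes a symmetric (dual) IoU term between predicted and annotated boxes: a precision-style term matching each predicted box to its best ground-truth box, and a recall-style term matching each ground-truth box to its best prediction, averaged together. The box format, the equal weighting, and the use of torchvision's box_iou are assumptions; the paper's exact reward formulation may differ.

```python
import torch
from torchvision.ops import box_iou  # pairwise IoU for (x1, y1, x2, y2) boxes


def dual_iou_reward(pred_boxes: torch.Tensor, gt_boxes: torch.Tensor) -> float:
    """Hypothetical dual IoU reward for one rollout.

    pred_boxes: (N, 4) float tensor of boxes parsed from the model's reasoning trace.
    gt_boxes:   (M, 4) float tensor of human-annotated evidence boxes.
    Returns a scalar in [0, 1] that rewards both covering every annotated box
    (recall term) and not predicting spurious boxes (precision term).
    """
    if pred_boxes.numel() == 0 or gt_boxes.numel() == 0:
        return 0.0
    ious = box_iou(pred_boxes, gt_boxes)        # (N, M) pairwise IoU matrix
    precision = ious.max(dim=1).values.mean()   # each prediction vs. its best-matching GT box
    recall = ious.max(dim=0).values.mean()      # each GT box vs. its best-matching prediction
    return 0.5 * (precision + recall).item()
```

In an RL loop, a term like this would simply be added to the usual answer-correctness reward, so that rollouts are credited both for reaching the right answer and for pointing at the right regions.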

This is a highly valuable piece of work.

