More Thought, Less Accuracy? On the Dual Nature of Reasoning in Vision-Language Models
Abstract
VAPO-Thinker-7B enhances multimodal reasoning by anchoring the process to visual information, improving performance on visual tasks while maintaining logical inference.
Reasoning has emerged as a pivotal capability of Large Language Models (LLMs). Through Reinforcement Learning (RL), typically with Group Relative Policy Optimization (GRPO), these models can solve complex tasks such as mathematics and code generation. Building on these advances, recent research has sought to extend reasoning to Vision-Language Models (VLMs), yielding promising results across diverse visual tasks. Despite this progress, our study uncovers the dual nature of multimodal reasoning: while it substantially enhances logical inference and improves performance on challenging problems, it may gradually impair perceptual grounding, leading to recognition failures on otherwise basic visual questions. Through further analysis, we attribute this phenomenon to visual forgetting, wherein prolonged reasoning causes the model to increasingly disregard visual input. To address this, we propose Vision-Anchored Policy Optimization (VAPO), a simple yet effective method that explicitly steers the reasoning process toward visually grounded trajectories. The resulting model, VAPO-Thinker-7B, significantly strengthens the model's reliance on visual information and achieves new state-of-the-art results on a wide range of established benchmarks. Project page: https://xytian1008.github.io/VAPO/
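For readers unfamiliar with the GRPO baseline named in the abstract, below is a minimal sketch of group-relative advantage estimation, plus a hypothetical vision-anchored reward-shaping term to illustrate the kind of intervention VAPO targets. The function names, the `grounding_score` signal, and the weight `lam` are illustrative assumptions; the paper's actual objective is described on the project page, not reproduced here.

```python
import torch

def grpo_advantages(group_rewards: torch.Tensor) -> torch.Tensor:
    # GRPO replaces a learned critic with a group baseline: each sampled
    # response's reward is normalized against the mean and std of the
    # rewards in its group.
    mean = group_rewards.mean(dim=-1, keepdim=True)
    std = group_rewards.std(dim=-1, keepdim=True)
    return (group_rewards - mean) / (std + 1e-8)

def vision_anchored_reward(task_reward: torch.Tensor,
                           grounding_score: torch.Tensor,
                           lam: float = 0.5) -> torch.Tensor:
    # HYPOTHETICAL sketch: a composite reward adding a bonus for
    # trajectories that stay grounded in the image (e.g., some measure of
    # reliance on visual tokens). The bonus term and weight `lam` are
    # assumptions for illustration, not VAPO's published formulation.
    return task_reward + lam * grounding_score

# Example: 4 responses sampled for one prompt.
task_reward = torch.tensor([1.0, 0.0, 1.0, 0.0])  # e.g., answer correctness
grounding = torch.tensor([0.9, 0.2, 0.4, 0.8])    # hypothetical grounding scores
advantages = grpo_advantages(vision_anchored_reward(task_reward, grounding))
print(advantages)  # correct AND grounded responses receive the largest advantage
```

Under this sketch's assumptions, shaping the reward rather than the sampling procedure keeps the update identical to standard GRPO, which is consistent with the community description of VAPO as a drop-in multimodal replacement for GRPO.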
Community
A sober look at the pros and cons of multimodal reasoning, with comprehensive findings and a new RL method that serves as a multimodal replacement for GRPO, achieving new state-of-the-art results.
Project page👉: https://xytian1008.github.io/VAPO/
Github repo👉: https://github.com/xytian1008/VAPO
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Self-Rewarding Vision-Language Model via Reasoning Decomposition (2025)
- Look Again, Think Slowly: Enhancing Visual Reflection in Vision-Language Models (2025)
- VTPerception-R1: Enhancing Multimodal Reasoning via Explicit Visual and Textual Perceptual Grounding (2025)
- Perception Before Reasoning: Two-Stage Reinforcement Learning for Visual Reasoning in Vision-Language Models (2025)
- Perception-Consistency Multimodal Large Language Models Reasoning via Caption-Regularized Policy Optimization (2025)
- Unveiling Chain of Step Reasoning for Vision-Language Models with Fine-grained Rewards (2025)
- Latent Visual Reasoning (2025)