Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better
Abstract
Autoregressive Semantic Visual Reconstruction (ASVR) improves multimodal understanding by focusing on semantic reconstruction rather than raw visual appearance, enhancing performance across various benchmarks.
Typical large vision-language models (LVLMs) apply autoregressive supervision solely to textual sequences, without fully incorporating the visual modality into the learning process. This results in three key limitations: (1) an inability to utilize images without accompanying captions, (2) the risk that captions omit critical visual details, and (3) the challenge that certain vision-centric content cannot be adequately conveyed through text. As a result, current LVLMs often prioritize vision-to-language alignment while potentially overlooking fine-grained visual information. While some prior works have explored autoregressive image generation, effectively leveraging autoregressive visual supervision to enhance image understanding remains an open challenge. In this paper, we introduce Autoregressive Semantic Visual Reconstruction (ASVR), which enables joint learning of visual and textual modalities within a unified autoregressive framework. We show that autoregressively reconstructing the raw visual appearance of images does not enhance and may even impair multimodal understanding. In contrast, autoregressively reconstructing the semantic representation of images consistently improves comprehension. Notably, we find that even when models are given continuous image features as input, they can effectively reconstruct discrete semantic tokens, resulting in stable and consistent improvements across a wide range of multimodal understanding benchmarks. Our approach delivers significant performance gains across varying data scales (556k-2M) and types of LLM backbones. Specifically, ASVR improves LLaVA-1.5 by 5% in average scores across 14 multimodal benchmarks. The code is available at https://github.com/AlenjandroWang/ASVR.
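Since the abstract centers on the training objective (standard next-token prediction on text plus autoregressive reconstruction of discrete semantic visual tokens rather than raw pixels), the following is a minimal PyTorch-style sketch of such a combined loss. Every name here (asvr_style_loss, vision_weight, the assumed semantic tokenizer that yields visual_token_labels) is an illustrative assumption, not the authors' implementation; refer to the linked repository for the actual code.

```python
# Minimal sketch of a dual autoregressive objective in the spirit of ASVR.
# Assumptions (not from the paper's code): the LVLM consumes continuous image
# features but is supervised to predict discrete *semantic* token ids produced
# by a separate semantic tokenizer, alongside the usual text tokens.
import torch
import torch.nn.functional as F


def asvr_style_loss(text_logits, text_labels,
                    visual_logits, visual_token_labels,
                    vision_weight=1.0, ignore_index=-100):
    """Next-token prediction on text plus autoregressive reconstruction of
    discrete semantic visual tokens (not raw pixel values)."""
    # Standard language-modeling loss on the caption / response tokens.
    text_loss = F.cross_entropy(
        text_logits.reshape(-1, text_logits.size(-1)),
        text_labels.reshape(-1),
        ignore_index=ignore_index,
    )
    # Autoregressive prediction of the image's discrete semantic token ids
    # (assumed to come from a quantized semantic encoder, not shown here).
    vision_loss = F.cross_entropy(
        visual_logits.reshape(-1, visual_logits.size(-1)),
        visual_token_labels.reshape(-1),
        ignore_index=ignore_index,
    )
    return text_loss + vision_weight * vision_loss
```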
Community
ASVR: Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better
(Pronounced "as-we-are")
Motivation: Can autoregressive visual generation supervision enhance VLMs' understanding?
Simply reconstructing raw image pixels doesn't help multimodal understanding, and can even hurt performance.
Instead, autoregressively reconstructing semantic representations leads to stronger visual-language comprehension.
This semantic-centric approach provides consistent improvements across various benchmarks.
ASVR demonstrates that it's not about predicting pixels, but about predicting meaning: a simple yet effective recipe for training better VLMs.
Code: github.com/AlenjandroWang/ASVR
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Slot-MLLM: Object-Centric Visual Tokenization for Multimodal LLM (2025)
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation (2025)
- FUSION: Fully Integration of Vision-Language Representations for Deep Cross-Modal Understanding (2025)
- VISTA: Enhancing Vision-Text Alignment in MLLMs via Cross-Modal Mutual Information Maximization (2025)
- The Scalability of Simplicity: Empirical Analysis of Vision-Language Learning with a Single Transformer (2025)
- SSR: Enhancing Depth Perception in Vision-Language Models via Rationale-Guided Spatial Reasoning (2025)
- Generative Multimodal Pretraining with Discrete Diffusion Timestep Tokens (2025)