MergeVQ: A Unified Framework for Visual Generation and Representation with Disentangled Token Merging and Quantization
Abstract
Masked Image Modeling (MIM) with Vector Quantization (VQ) has achieved great success in both self-supervised pre-training and image generation. However, most existing methods struggle with the trade-off between generation quality and representation learning (and efficiency) in a shared latent space. To push the limits of this paradigm, we propose MergeVQ, which incorporates token merging techniques into VQ-based generative models to bridge the gap between image generation and visual representation learning in a unified architecture. During pre-training, MergeVQ decouples top-k semantics from the latent space with a token-merge module after the self-attention blocks in the encoder, for subsequent Look-up Free Quantization (LFQ) and global alignment, and recovers fine-grained details through cross-attention in the decoder for reconstruction. For second-stage generation, we introduce MergeAR, which performs KV Cache compression for efficient raster-order prediction. Extensive experiments on ImageNet verify that MergeVQ as an AR generative model achieves competitive performance in both visual representation learning and image generation while maintaining favorable token efficiency and inference speed. The code and models will be available at https://apexgen-x.github.io/MergeVQ.
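To make the token-merging idea in the abstract concrete, here is a minimal, hypothetical sketch of a top-k merge step: keep the k highest-scoring tokens (e.g. by an attention-derived importance score) and fold each remaining token into its most similar kept token. This is an illustrative assumption, not MergeVQ's actual module; the function name, scoring, and cosine-similarity assignment are all simplifications.

```python
import numpy as np

def merge_tokens(tokens, scores, k):
    """Hypothetical top-k token merging sketch (NOT the official MergeVQ code).

    tokens: (N, D) array of encoder token features.
    scores: (N,) importance scores, assumed attention-derived.
    k:      number of tokens to keep.

    Returns the (k, D) merged tokens, the kept indices, and the
    merge assignment that a decoder could use to recover positions.
    """
    n = len(tokens)
    keep = np.argsort(scores)[::-1][:k]                 # top-k token indices
    rest = np.setdiff1d(np.arange(n), keep)             # tokens to merge away
    merged = tokens[keep].astype(float).copy()
    counts = np.ones(k)

    # Assign each dropped token to its most similar kept token
    # (cosine similarity), then average it in.
    def normalize(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

    sim = normalize(tokens[rest]) @ normalize(tokens[keep]).T  # (N-k, k)
    assign = sim.argmax(axis=1)
    for r, a in zip(rest, assign):
        merged[a] += tokens[r]
        counts[a] += 1
    merged /= counts[:, None]
    return merged, keep, assign
```

Under this sketch, the encoder would quantize only the k merged tokens (reducing sequence length), while the stored `keep`/`assign` maps let a cross-attention decoder restore per-position detail, mirroring the recovery-for-reconstruction step the abstract describes.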
Community
[CVPR 2025] MergeVQ: A Unified Framework for Visual Generation and Representation with Disentangled Token Merging and Quantization
Upvotes are welcome, and thank you for your support!
Feel free to discuss any questions about MergeVQ with us here or on Twitter (X)!
https://x.com/ZedongWangAI/status/1907878566399979792
We also welcome open discussion about future directions of visual tokenizers and advanced autoregressive generation!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- V2Flow: Unifying Visual Tokenization and Large Language Model Vocabularies for Autoregressive Image Generation (2025)
- HiTVideo: Hierarchical Tokenizers for Enhancing Text-to-Video Generation with Autoregressive Large Language Models (2025)
- Autoregressive Image Generation with Randomized Parallel Decoding (2025)
- Harmonizing Visual Representations for Unified Multimodal Understanding and Generation (2025)
- USP: Unified Self-Supervised Pretraining for Image Generation and Understanding (2025)
- Robust Latent Matters: Boosting Image Generation with Sampling Error Synthesis (2025)
- Unleashing the Potential of Large Language Models for Text-to-Image Generation through Autoregressive Representation Alignment (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot recommend