Let Multimodal Embedders Learn When to Augment Query via Adaptive Query Augmentation
Abstract
M-Solomon, a multimodal embedder, adaptively augments queries using a Multimodal LLM, improving performance and reducing embedding latency compared to baselines.
Query augmentation makes queries more meaningful by appending further information to them before retrieving relevant documents. Recent studies have proposed Large Language Model (LLM)-based embedders that learn representations for embedding and text generation for query augmentation in a multi-task manner, leveraging the generative capabilities of LLMs. During inference, these jointly trained embedders perform query augmentation followed by embedding, showing effective results. However, augmenting every query leads to substantial embedding latency, and query augmentation can be detrimental to performance for some queries. Moreover, previous methods have not been explored in multimodal environments. To tackle these problems, we propose M-Solomon, a universal multimodal embedder that can adaptively determine when to augment queries. Our approach first divides the queries of the training datasets into two groups at the dataset level: one contains queries that require augmentation, and the other contains queries that do not. Then, we introduce a synthesis process that generates appropriate augmentations for the queries that require them by leveraging a powerful Multimodal LLM (MLLM). Next, we present adaptive query augmentation, through which M-Solomon learns to conduct query augmentation only when necessary: it generates synthetic augmentations with the prefix /augment for queries that demand them and generates the simple string /embed for the others. Experimental results show that M-Solomon not only surpasses the baseline without augmentation by a large margin but also outperforms the baseline that always uses augmentation, while providing much faster embedding latency.
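The adaptive mechanism reduces to a small inference-time branch on the model's own decoded output. Below is a minimal sketch of that loop; the model wrapper and its generate/embed methods are hypothetical placeholders, and only the /augment prefix and the /embed string come from the paper.

```python
# Minimal sketch of adaptive query augmentation at inference time.
# `model`, `generate`, and `embed` are hypothetical placeholders; only the
# "/augment" prefix and the "/embed" string are taken from the paper.

def embed_query(model, query: str, image=None):
    """Embed a (multimodal) query, augmenting it only when the model asks to."""
    # The jointly trained embedder first decodes a short decision string.
    decision = model.generate(query, image)

    if decision.startswith("/augment"):
        # The model continued past the prefix with a synthetic augmentation;
        # append it to the query before embedding.
        augmentation = decision[len("/augment"):].strip()
        query = f"{query} {augmentation}"
    # Otherwise the model emitted the short "/embed" string, which is far
    # cheaper to decode than a full augmentation, so the query is embedded
    # as-is.
    return model.embed(query, image)
```

Under this reading, the latency savings come from decoding only the two-token-scale "/embed" string for queries that do not benefit from augmentation, rather than a full generated augmentation for every query.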
Community
We propose M-Solomon, a universal multimodal embedder that can adaptively determine when to augment queries.
The following similar papers were recommended by Librarian Bot via the Semantic Scholar API:
- MetaEmbed: Scaling Multimodal Retrieval at Test-Time with Flexible Late Interaction (2025)
- ReasonEmbed: Enhanced Text Embeddings for Reasoning-Intensive Document Retrieval (2025)
- Large Reasoning Embedding Models: Towards Next-Generation Dense Retrieval Paradigm (2025)
- E2Rank: Your Text Embedding can Also be an Effective and Efficient Listwise Reranker (2025)
- Learning Contextual Retrieval for Robust Conversational Search (2025)
- Beyond Single Embeddings: Capturing Diverse Targets with Multi-Query Retrieval (2025)
- UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning (2025)