UNIDOC-BENCH: A Unified Benchmark for Document-Centric Multimodal RAG
Abstract
UniDoc-Bench is a large-scale benchmark for multimodal retrieval-augmented generation, evaluating systems across text, images, and their fusion in real-world document-centric scenarios.
Multimodal retrieval-augmented generation (MM-RAG) is a key approach for applying large language models (LLMs) and agents to real-world knowledge bases, yet current evaluations are fragmented, focusing on either text or images in isolation or on simplified multimodal setups that fail to capture document-centric multimodal use cases. In this paper, we introduce UniDoc-Bench, the first large-scale, realistic benchmark for MM-RAG built from 70k real-world PDF pages across eight domains. Our pipeline extracts and links evidence from text, tables, and figures, then generates 1,600 multimodal QA pairs spanning factual retrieval, comparison, summarization, and logical reasoning queries. To ensure reliability, 20% of QA pairs are validated by multiple annotators and expert adjudication. UniDoc-Bench supports apples-to-apples comparison across four paradigms: (1) text-only, (2) image-only, (3) multimodal text-image fusion, and (4) multimodal joint retrieval, all under a unified protocol with standardized candidate pools, prompts, and evaluation metrics. Our experiments show that multimodal text-image fusion RAG systems consistently outperform both unimodal and joint multimodal embedding-based retrieval, indicating that neither text nor images alone are sufficient and that current multimodal embeddings remain inadequate. Beyond benchmarking, our analysis reveals when and how visual context complements textual evidence, uncovers systematic failure modes, and offers actionable guidance for developing more robust MM-RAG pipelines.
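To make the four paradigms concrete, below is a minimal, hypothetical sketch of how one might retrieve from a shared candidate pool under each setting. The `Candidate` fields, the placeholder embedding functions (`embed_text`, `embed_image`, `embed_joint`), and the score-level fusion rule are illustrative assumptions for exposition, not UniDoc-Bench's actual interface or the paper's exact fusion method.

```python
# Hypothetical comparison of the four retrieval paradigms over one candidate pool.
# Embedding functions are placeholders assumed to return 1-D numpy vectors.
from dataclasses import dataclass
from typing import Callable, List

import numpy as np


@dataclass
class Candidate:
    doc_id: str
    text: str        # extracted text of the page/table chunk
    image_path: str  # rendered page or figure image


def cosine_scores(query_vec: np.ndarray, cand_vecs: np.ndarray) -> np.ndarray:
    """Cosine similarity of a query vector against a matrix of candidate vectors."""
    denom = np.linalg.norm(cand_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    return cand_vecs @ query_vec / denom


def retrieve(paradigm: str, query: str, cands: List[Candidate],
             embed_text: Callable[[str], np.ndarray],
             embed_image: Callable[[str], np.ndarray],
             embed_joint: Callable[[str, str], np.ndarray],
             k: int = 5) -> List[str]:
    """Return the top-k doc_ids from the same candidate pool under one paradigm."""
    q = embed_text(query)
    if paradigm == "text_only":
        scores = cosine_scores(q, np.stack([embed_text(c.text) for c in cands]))
    elif paradigm == "image_only":
        scores = cosine_scores(q, np.stack([embed_image(c.image_path) for c in cands]))
    elif paradigm == "fusion":
        # Score-level fusion (one simple choice): average text- and image-based scores.
        s_text = cosine_scores(q, np.stack([embed_text(c.text) for c in cands]))
        s_img = cosine_scores(q, np.stack([embed_image(c.image_path) for c in cands]))
        scores = (s_text + s_img) / 2
    elif paradigm == "joint":
        # A single multimodal embedding per candidate (text and image encoded together).
        scores = cosine_scores(q, np.stack([embed_joint(c.text, c.image_path) for c in cands]))
    else:
        raise ValueError(f"unknown paradigm: {paradigm}")
    order = np.argsort(-scores)[:k]
    return [cands[i].doc_id for i in order]
```

Holding the candidate pool, prompts, and metrics fixed while varying only the retrieval paradigm, as in this sketch, is what makes the comparison apples-to-apples.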
Community
Librarian Bot found the following similar papers, recommended by the Semantic Scholar API:
- Zero-shot Multimodal Document Retrieval via Cross-modal Question Generation (2025)
- Transforming Questions and Documents for Semantically Aligned Retrieval-Augmented Generation (2025)
- CMRAG: Co-modality-based visual document retrieval and question answering (2025)
- MoLoRAG: Bootstrapping Document Understanding via Multi-modal Logic-aware Retrieval (2025)
- LAD-RAG: Layout-aware Dynamic RAG for Visually-Rich Document Understanding (2025)
- Multimodal Iterative RAG for Knowledge-Intensive Visual Question Answering (2025)
- DocHop-QA: Towards Multi-Hop Reasoning over Multimodal Document Collections (2025)