---
license: cc
configs:
- config_name: default
  data_files:
  - split: default
    path: data.csv
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
tags:
- multimodal
pretty_name: MCiteBench
---
# MCiteBench Dataset
MCiteBench is a benchmark for evaluating multimodal citation text generation in Multimodal Large Language Models (MLLMs).

- Website: https://caiyuhu.github.io/MCiteBench
- Paper: https://arxiv.org/abs/2503.02589
- Code: https://github.com/caiyuhu/MCiteBench
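
Given the `configs` metadata above (a `default` config backed by `data.csv`), the dataset can presumably be loaded with the `datasets` library. This is a minimal sketch; the repo id `caiyuhu/MCiteBench` is an assumption inferred from the project links, not stated in the card.

```python
from datasets import load_dataset

# "caiyuhu/MCiteBench" is an assumed repo id inferred from the project links;
# adjust if the dataset is hosted under a different namespace.
ds = load_dataset("caiyuhu/MCiteBench")  # "default" config, backed by data.csv
print(ds["default"][0])                  # the YAML config defines a single "default" split
```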
## Data Download
Please download `MCiteBench_full_dataset.zip`. It contains the `data.jsonl` file and the `visual_resources` folder.
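
If the zip is hosted in this dataset repo, it can also be fetched programmatically. A minimal sketch using `huggingface_hub`; as above, the repo id `caiyuhu/MCiteBench` is an assumption.

```python
import zipfile

from huggingface_hub import hf_hub_download

# "caiyuhu/MCiteBench" is an assumed repo id inferred from the project links.
zip_path = hf_hub_download(
    repo_id="caiyuhu/MCiteBench",
    filename="MCiteBench_full_dataset.zip",
    repo_type="dataset",
)
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall("MCiteBench_full_dataset")  # yields data.jsonl and visual_resources/
```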
## Data Statistics

## Data Format
The format of `data_example.jsonl` and `data.jsonl` is as follows:
```
question_type: [str]          # The type of question: "explanation" or "locating"
question: [str]               # The text of the question
answer: [str]                 # The answer to the question; may be a string, list, float, or integer depending on the context
evidence_keys: [list]         # Abstract references or identifiers for evidence, such as "section x", "line y", "figure z", or "table k".
                              # These are not the actual content but pointers indicating where the evidence can be found.
                              # Example: ["section 2.1", "line 45", "Figure 3"]
evidence_contents: [list]     # The resolved evidence content corresponding to `evidence_keys`.
                              # Items can be text excerpts, image file paths, or table file paths that provide the actual evidence for the answer.
                              # Each item corresponds to the same-index item in `evidence_keys`.
                              # Example: ["This is the content of section 2.1.", "/path/to/figure_3.jpg"]
evidence_modal: [str]         # The modality of the evidence: "figure", "table", "text", or "mixed"
evidence_count: [int]         # The total count of all evidence related to the question
distractor_count: [int]       # The total number of distractor items, i.e., information blocks that are irrelevant or misleading for the answer
info_count: [int]             # The total number of information blocks in the document, including text, tables, images, etc.
text_2_idx: [dict[str, str]]  # Maps text information to its index
idx_2_text: [dict[str, str]]  # Reverse mapping from index to text content
image_2_idx: [dict[str, str]] # Maps image paths to their indices
idx_2_image: [dict[str, str]] # Reverse mapping from index to image path
table_2_idx: [dict[str, str]] # Maps table paths to their indices
idx_2_table: [dict[str, str]] # Reverse mapping from index to table path
meta_data: [dict]             # Additional metadata used during data construction
distractor_contents: [list]   # Like `evidence_contents`, but contains distractor (irrelevant or misleading) information
question_id: [str]            # The ID of the question
pdf_id: [str]                 # The ID of the associated PDF document
```
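
Since each line of `data.jsonl` is a standalone JSON object with the fields above, it can be read with the standard library. A minimal sketch; the file path assumes the zip was extracted as in the Data Download section.

```python
import json

# Path assumes the zip was extracted to MCiteBench_full_dataset/ as above.
with open("MCiteBench_full_dataset/data.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["question_id"], record["question_type"], record["evidence_modal"])
        # evidence_keys[i] is the pointer; evidence_contents[i] is the resolved content
        for key, content in zip(record["evidence_keys"], record["evidence_contents"]):
            print(f"  {key} -> {str(content)[:80]}")
        break  # inspect only the first record
```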
## Citation
If you find MCiteBench useful for your research or applications, please cite it using this BibTeX:
```bibtex
@article{hu2025mcitebench,
  title={MCiteBench: A Benchmark for Multimodal Citation Text Generation in MLLMs},
  author={Hu, Caiyu and Zhang, Yikai and Zhu, Tinghui and Ye, Yiwei and Xiao, Yanghua},
  journal={arXiv preprint arXiv:2503.02589},
  year={2025}
}
```