OmniVideoBench: Towards Audio-Visual Understanding Evaluation for Omni MLLMs
✨ Overview
Recent advances in multimodal large language models (MLLMs) have brought remarkable progress in video understanding.
However, most existing benchmarks fail to jointly evaluate audio and visual reasoning: they often focus on a single modality or overlook the interaction between the two.
🎬 OmniVideoBench fills this gap.
It is a large-scale, rigorously curated benchmark for assessing synergistic audio-visual intelligence, emphasizing modality complementarity, logical consistency, and long-term temporal reasoning.
- 1,000 high-quality QA pairs
- 628 diverse videos (from a few seconds up to 30 minutes)
- Each annotated with step-by-step multimodal reasoning
- Evaluations reveal a large gap between current models and human-level reasoning
Figure 1. OmniVideoBench overview: "V" indicates visual reasoning and "A" indicates audio reasoning. Each example includes atomic reasoning traces.
🧠 Diverse Reasoning Dimensions
OmniVideoBench tests deep audio-visual reasoning across a wide variety of tasks and modalities:
- 628 videos from 8 major categories & 68 subcategories
- 1,000 QA pairs with detailed reasoning chains
- 13 reasoning types, from perception to causal inference
- Audio-Visual Complementarity ensured for every question
- Long-Video Evaluation: durations up to 30 minutes
Figure 2. OmniVideoBench covers broad categories and reasoning types. Distributions show video durations and three audio types (Speech, Sound, Music).
🧩 Pipeline
A glance at how OmniVideoBench was built, from raw videos to verified reasoning annotations:
- 🎥 Video Collection: Gather long-form videos from diverse domains and acoustic environments.
- ✂️ Clip Segmentation: Divide videos into context-preserving segments.
- 📝 Question Generation: Design multimodal questions that require both audio and visual reasoning.
- 🔍 Reasoning Decomposition: Break down each QA into atomic reasoning steps (audio / visual / both).
- 🧾 Annotation & Verification: Human experts verify correctness, modality alignment, and logical flow.
- 🚦 Quality Filtering: Remove ambiguous or low-quality samples through multi-stage review.
- 📦 Formatting & Packaging: Structure QA data in standardized JSON and create benchmark splits.
Figure 3. Data construction and refinement pipeline of OmniVideoBench.
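For concreteness, a single QA record might be structured like the sketch below, written as a Python dict for readability. Every field name here is a hypothetical assumption for illustration; the paper specifies only that each QA pair carries atomic reasoning steps labeled by the modality they rely on.

```python
# Hypothetical OmniVideoBench QA record. All field names are assumptions
# for illustration, not the official schema. Each reasoning step is atomic
# and tagged with the modality it depends on: "audio", "visual", or "both".
qa_record = {
    "video_id": "example_0001",   # hypothetical identifier
    "duration_sec": 742,          # durations range from seconds to ~30 minutes
    "category": "documentary",    # one of the 8 major categories
    "question": "Why does the crowd cheer right after the speaker pauses?",
    "options": ["A. ...", "B. ...", "C. ...", "D. ..."],
    "answer": "B",
    "reasoning_steps": [
        {"modality": "audio",  "step": "A pause in the speech is followed by loud cheering."},
        {"modality": "visual", "step": "The speaker raises a trophy during the pause."},
        {"modality": "both",   "step": "The cheering is a reaction to the trophy reveal."},
    ],
}
```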
📜 License
Our dataset is released under the CC-BY-NC-SA-4.0 license.
⚠️ To access and use this dataset, you must understand and agree to the following: the dataset is for research purposes only and may not be used for commercial or any other purposes. The user assumes full responsibility for any other use or dissemination.
We do not own the copyright of any raw video files. We currently provide video access to researchers under the condition that they acknowledge the above license. We respect and acknowledge the copyrights of the original video authors.
If the original authors believe their videos should be removed, please contact [email protected] or raise an issue directly.
📬 Dataset Access
Please contact [email protected] to obtain the full dataset.
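Once the files are obtained, a minimal evaluation sketch could look like the following, assuming the release ships a single JSON file of records like the one sketched above. The file name `omnivideobench.json` and the field names are assumptions, not the actual release format.

```python
import json

# Load the benchmark records (the file name is an assumption about the release).
with open("omnivideobench.json", "r", encoding="utf-8") as f:
    records = json.load(f)

def evaluate(predict_fn, records):
    """Compute multiple-choice accuracy over the benchmark.

    predict_fn(record) should return an option letter such as "B"; in practice
    it would run an omni MLLM over the video frames, audio track, and question.
    """
    correct = sum(1 for r in records if predict_fn(r) == r["answer"])
    return correct / len(records)

# Sanity check with a trivial baseline that always answers "A".
print(f"Always-A baseline accuracy: {evaluate(lambda r: 'A', records):.3f}")
```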
🪶 Citation
If you find OmniVideoBench useful for your research, please cite:
```bibtex
@misc{li2025omnivideobenchaudiovisualunderstandingevaluation,
  title={OmniVideoBench: Towards Audio-Visual Understanding Evaluation for Omni MLLMs},
  author={Caorui Li and Yu Chen and Yiyan Ji and Jin Xu and Zhenyu Cui and Shihao Li and Yuanxing Zhang and Jiafu Tang and Zhenghao Song and Dingling Zhang and Ying He and Haoxiang Liu and Yuxuan Wang and Qiufeng Wang and Zhenhe Wu and Jiehui Luo and Zhiyu Pan and Weihao Xie and Chenchen Zhang and Zhaohui Wang and Jiayi Tian and Yanghai Wang and Zhe Cao and Minxin Dai and Ke Wang and Runzhe Wen and Yinghao Ma and Yaning Pan and Sungkyun Chang and Termeh Taheri and Haiwen Xia and Christos Plachouras and Emmanouil Benetos and Yizhi Li and Ge Zhang and Jian Yang and Tianhao Peng and Zili Wang and Minghao Liu and Junran Peng and Zhaoxiang Zhang and Jiaheng Liu},
  year={2025},
  eprint={2510.10689},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2510.10689},
}
```