---
dataset_info:
  features:
  - name: video
    dtype: string
  - name: videoType
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: correctAnswer
    dtype: string
  - name: abilityType_L2
    dtype: string
  - name: abilityType_L3
    dtype: string
  - name: question_idx
    dtype: int64
  splits:
  - name: test
    num_bytes: 1135911
    num_examples: 1257
  download_size: 586803
  dataset_size: 1135911
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
task_categories:
- video-text-to-text
---

# MMR-V: *Can MLLMs Think with Video?* A Benchmark for Multimodal Deep Reasoning in Videos

📝 Paper | 💻 Code | 🏠 Homepage

## 👀 MMR-V Data Card ("Think with Video")

The sequential structure of videos poses a challenge to the ability of multimodal large language models (MLLMs) to 🕵️ locate multi-frame evidence and conduct multimodal reasoning. However, existing video benchmarks mainly focus on understanding tasks, which only require models to match the frames mentioned in the question (referred to as "question frames") and perceive a few adjacent frames. To address this gap, we propose **MMR-V: A Benchmark for Multimodal Deep Reasoning in Videos**. MMR-V consists of **317** videos and **1,257** tasks. Models such as o3 and o4-mini have achieved impressive results on image reasoning tasks by leveraging tool use to enable 🕵️ evidence mining on images. Similarly, tasks in MMR-V require models to perform in-depth reasoning and analysis over visual information from different frames of a video, challenging their ability to 🕵️ **think with video and mine evidence across long-range, multi-frame contexts**.

## 🎬 MMR-V Task Examples
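
The illustrative figures for this section are not reproduced here. As a textual stand-in, the sketch below shows the fields each task carries, taken from the dataset metadata above; all values are placeholders, not a real benchmark item, and the number of options is only an assumption.

```python
# Illustrative structure of one MMR-V task (placeholder values, not a real item).
example_task = {
    "video": "<filename of one of the 317 source videos>",
    "videoType": "<video category label>",
    "question": "<multi-frame reasoning question about the video>",
    "options": ["<option A>", "<option B>", "<option C>", "<option D>"],
    "correctAnswer": "<the correct option>",
    "abilityType_L2": "<coarse-grained ability category>",
    "abilityType_L3": "<fine-grained ability category>",
    "question_idx": 0,
}
```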

## 📚 Evaluation

1. Download the MMR-V videos:

```shell
huggingface-cli download JokerJan/MMR-VBench --repo-type dataset --local-dir MMR-V --local-dir-use-symlinks False
```

2. Extract the videos from the split `.tar` archives:

```shell
cat videos.tar.part.* > videos.tar
tar -xvf videos.tar
```

3. Load the MMR-V benchmark:

```python
from datasets import load_dataset

samples = load_dataset("JokerJan/MMR-VBench", split="test")
```

A minimal sketch of an evaluation loop over the loaded samples is shown below.
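
The loop below is only a sketch of how the benchmark could be scored. It assumes the model reports its answer as an option letter, that the extracted videos live under `MMR-V/videos/`, and that `query_model` is a placeholder for whatever MLLM inference call you use; none of these are part of the dataset or any released code.

```python
from datasets import load_dataset

samples = load_dataset("JokerJan/MMR-VBench", split="test")

def query_model(video_path: str, prompt: str) -> str:
    """Placeholder: replace with your MLLM inference call, returning an option letter."""
    raise NotImplementedError

correct = 0
for sample in samples:
    # Each example provides: video, videoType, question, options,
    # correctAnswer, abilityType_L2, abilityType_L3, question_idx.
    option_lines = [
        f"{chr(ord('A') + i)}. {opt}" for i, opt in enumerate(sample["options"])
    ]
    prompt = (
        f"{sample['question']}\n"
        + "\n".join(option_lines)
        + "\nAnswer with the letter of the correct option."
    )
    video_path = f"MMR-V/videos/{sample['video']}"  # assumed layout after extraction
    prediction = query_model(video_path, prompt).strip()
    if prediction and prediction[0] == sample["correctAnswer"].strip()[0]:
        correct += 1

print(f"Accuracy: {correct / len(samples):.2%}")
```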

## 🎯 Experiment Results

## Dataset Details

- **Curated by:** MMR-V Team
- **Language(s) (NLP):** English
- **License:** CC-BY 4.0