---
dataset_info:
  features:
    - name: video
      dtype: string
    - name: videoType
      dtype: string
    - name: question
      dtype: string
    - name: options
      sequence: string
    - name: correctAnswer
      dtype: string
    - name: abilityType_L2
      dtype: string
    - name: abilityType_L3
      dtype: string
    - name: question_idx
      dtype: int64
  splits:
    - name: test
      num_bytes: 1135911
      num_examples: 1257
  download_size: 586803
  dataset_size: 1135911
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
task_categories:
  - video-text-to-text
---

# MMR-V: Can MLLMs Think with Video? A Benchmark for Multimodal Deep Reasoning in Videos

๐Ÿ“ Paper | ๐Ÿ’ป Code | ๐Ÿ  Homepage

๐Ÿ‘€ MMR-V Data Card ("Think with Video")

The sequential structure of videos poses a challenge for multimodal large language models (MLLMs), which must 🕵️locate multi-frame evidence and conduct multimodal reasoning. However, existing video benchmarks mainly focus on understanding tasks, which only require models to match the frames mentioned in the question (referred to as "question frames") and perceive a few adjacent frames. To address this gap, we propose MMR-V: A Benchmark for Multimodal Deep Reasoning in Videos. MMR-V consists of 317 videos and 1,257 tasks. Models like o3 and o4-mini have achieved impressive results on image reasoning tasks by leveraging tool use to enable 🕵️evidence mining on images. Similarly, tasks in MMR-V require models to perform in-depth reasoning and analysis over visual information from different frames of a video, challenging their ability to 🕵️think with video and mine evidence across long-range, multi-frame contexts.
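Each task is a multiple-choice question grounded in a full video. Going by the feature schema in the metadata above, a loaded record looks roughly like the sketch below; every value shown is a placeholder, not actual dataset content:

```python
# Illustrative record shape following the dataset_info schema above.
# All values are placeholders, not real MMR-V content.
sample = {
    "video": "videos/example.mp4",        # path to the source video file
    "videoType": "placeholder-category",  # video category label
    "question": "placeholder question?",  # question text
    "options": ["option 1", "option 2"],  # candidate answers (sequence of strings)
    "correctAnswer": "placeholder",       # ground-truth answer (string)
    "abilityType_L2": "placeholder-L2",   # level-2 ability label
    "abilityType_L3": "placeholder-L3",   # level-3 ability label
    "question_idx": 0,                    # task index (int64)
}
```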

๐ŸŽฌ MMR-V Task Examples

๐Ÿ“š Evaluation

1. Download the MMR-V videos:

```bash
huggingface-cli download JokerJan/MMR-VBench --repo-type dataset --local-dir MMR-V --local-dir-use-symlinks False
```

2. Extract the videos from the split `.tar` archives:

```bash
cat videos.tar.part.* > videos.tar
tar -xvf videos.tar
```

3. Load the MMR-V benchmark (a scoring sketch follows below):

```python
from datasets import load_dataset

samples = load_dataset("JokerJan/MMR-VBench", split="test")
```
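Once the split is loaded, a scoring loop only needs to pair each question with its options and compare the model's answer against `correctAnswer`. Below is a minimal sketch, assuming a hypothetical `predict` function standing in for your model call; the prompt format and answer matching are illustrative, not the official evaluation protocol:

```python
from datasets import load_dataset

def predict(video_path: str, prompt: str) -> str:
    """Hypothetical stand-in: call your MLLM here and return an option letter."""
    raise NotImplementedError

samples = load_dataset("JokerJan/MMR-VBench", split="test")

correct = 0
for sample in samples:
    # Present the options as lettered choices, e.g. "A. <option text>".
    letters = [chr(ord("A") + i) for i in range(len(sample["options"]))]
    choices = "\n".join(f"{l}. {opt}" for l, opt in zip(letters, sample["options"]))
    prompt = f"{sample['question']}\n{choices}\nAnswer with the option letter only."
    # correctAnswer is stored as a string; check whether it holds the letter
    # or the full option text before comparing.
    if predict(sample["video"], prompt) == sample["correctAnswer"]:
        correct += 1

print(f"Accuracy: {correct / len(samples):.2%}")
```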

๐ŸŽฏ Experiment Results

## Dataset Details

- **Curated by:** MMR-V Team
- **Language(s) (NLP):** English
- **License:** CC-BY 4.0