---
task_categories:
  - question-answering
language:
  - en
size_categories:
  - 1K<n<10K
dataset_info:
  - config_name: Real_Time_Visual_Understanding
    features:
      - name: question_id
        dtype: string
      - name: task_type
        dtype: string
      - name: question
        dtype: string
      - name: time_stamp
        dtype: string
      - name: answer
        dtype: string
      - name: options
        dtype: string
      - name: frames_required
        dtype: string
      - name: temporal_clue_type
        dtype: string
    splits:
      - name: Real_Time_Visual_Understanding
        num_examples: 2500
  - config_name: Sequential_Question_Answering
    features:
      - name: question_id
        dtype: string
      - name: task_type
        dtype: string
      - name: question
        dtype: string
      - name: time_stamp
        dtype: string
      - name: answer
        dtype: string
      - name: options
        dtype: string
      - name: frames_required
        dtype: string
      - name: temporal_clue_type
        dtype: string
    splits:
      - name: Sequential_Question_Answering
        num_examples: 250
  - config_name: Contextual_Understanding
    features:
      - name: question_id
        dtype: string
      - name: task_type
        dtype: string
      - name: question
        dtype: string
      - name: time_stamp
        dtype: string
      - name: answer
        dtype: string
      - name: options
        dtype: string
      - name: frames_required
        dtype: string
      - name: temporal_clue_type
        dtype: string
    splits:
      - name: Contextual_Understanding
        num_examples: 500
  - config_name: Omni_Source_Understanding
    features:
      - name: question_id
        dtype: string
      - name: task_type
        dtype: string
      - name: question
        dtype: string
      - name: time_stamp
        dtype: string
      - name: answer
        dtype: string
      - name: options
        dtype: string
      - name: frames_required
        dtype: string
      - name: temporal_clue_type
        dtype: string
    splits:
      - name: Omni_Source_Understanding
        num_examples: 1000
  - config_name: Proactive_Output
    features:
      - name: question_id
        dtype: string
      - name: task_type
        dtype: string
      - name: question
        dtype: string
      - name: time_stamp
        dtype: string
      - name: ground_truth_time_stamp
        dtype: string
      - name: ground_truth_output
        dtype: string
      - name: frames_required
        dtype: string
      - name: temporal_clue_type
        dtype: string
    splits:
      - name: Proactive_Output
        num_examples: 250
configs:
  - config_name: Real_Time_Visual_Understanding
    data_files:
      - split: Real_Time_Visual_Understanding
        path: StreamingBench/Real_Time_Visual_Understanding.csv
  - config_name: Sequential_Question_Answering
    data_files:
      - split: Sequential_Question_Answering
        path: StreamingBench/Sequential_Question_Answering.csv
  - config_name: Contextual_Understanding
    data_files:
      - split: Contextual_Understanding
        path: StreamingBench/Contextual_Understanding.csv
  - config_name: Omni_Source_Understanding
    data_files:
      - split: Omni_Source_Understanding
        path: StreamingBench/Omni_Source_Understanding.csv
  - config_name: Proactive_Output
    data_files:
      - split: Proactive_Output
        path: StreamingBench/Proactive_Output_50.csv
      - split: Proactive_Output_250
        path: StreamingBench/Proactive_Output.csv
---

StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding

StreamingBench evaluates Multimodal Large Language Models (MLLMs) in real-time, streaming video understanding tasks. 🌟

🎞️ Overview

As MLLMs continue to advance, they remain largely focused on offline video comprehension, where all frames are pre-loaded before making queries. However, this is far from the human ability to process and respond to video streams in real-time, capturing the dynamic nature of multimedia content. To bridge this gap, StreamingBench introduces the first comprehensive benchmark for streaming video understanding in MLLMs.

Key Evaluation Aspects

  • 🎯 Real-time Visual Understanding: Can the model process and respond to visual changes in real-time?
  • 🔊 Omni-source Understanding: Does the model integrate visual and audio inputs synchronously in real-time video streams?
  • 🎬 Contextual Understanding: Can the model comprehend the broader context within video streams?

Dataset Statistics

  • 📊 900 diverse videos
  • 📝 4,500 human-annotated QA pairs
  • ⏱️ Five questions per video at different timestamps
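
Each task is published as its own config with a single split of the same name, so one task can be loaded on its own. As a minimal sketch using the 🤗 `datasets` library — the repository id `mjuicem/StreamingBench` is an assumption taken from the hosting page, so adjust it if the dataset lives under a different namespace:

```python
# Illustrative loader for one StreamingBench task config.
# NOTE: the repo id "mjuicem/StreamingBench" is an assumption, not confirmed
# by the card itself; replace it with the actual dataset id if it differs.

# Config names and example counts, as declared in the dataset metadata above.
CONFIGS = {
    "Real_Time_Visual_Understanding": 2500,
    "Sequential_Question_Answering": 250,
    "Contextual_Understanding": 500,
    "Omni_Source_Understanding": 1000,
    "Proactive_Output": 250,
}

def load_config(name: str):
    """Download one task config; each config exposes a split of the same name."""
    from datasets import load_dataset  # pip install datasets
    if name not in CONFIGS:
        raise ValueError(f"unknown config: {name!r}, expected one of {sorted(CONFIGS)}")
    return load_dataset("mjuicem/StreamingBench", name, split=name)
```

For example, `load_config("Real_Time_Visual_Understanding")` should return a `Dataset` whose columns include `question_id`, `question`, `time_stamp`, `answer`, and `options`, per the features listed above.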

🎬 Video Categories

(figure: Video Categories)

πŸ” Task Taxonomy

(figure: Task Taxonomy)

🔬 Experimental Results

Performance of Various MLLMs on StreamingBench

  • All Context

  • 60 seconds of context preceding the query time

  • Comparison of Main Experiment vs. 60 Seconds of Video Context

Performance of Different MLLMs on the Proactive Output Task

"≤ x s" means that the answer is considered correct if the actual output time is within x seconds of the ground truth.
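
The "≤ x s" criterion above can be sketched as a simple accuracy computation. This helper is illustrative, not the official evaluation code, and it assumes predicted and ground-truth output times have already been converted to seconds:

```python
def proactive_accuracy(pred_times, gt_times, x):
    """Fraction of examples whose predicted output time falls within
    x seconds of the ground-truth output time (the "<= x s" criterion)."""
    if len(pred_times) != len(gt_times):
        raise ValueError("prediction/ground-truth length mismatch")
    hits = sum(abs(p - g) <= x for p, g in zip(pred_times, gt_times))
    return hits / len(gt_times)

# Example: three hypothetical predictions, scored at several tolerances.
preds = [12.0, 45.5, 80.0]
truth = [10.0, 46.0, 90.0]
scores = {x: proactive_accuracy(preds, truth, x) for x in (1, 2, 4, 10)}
```

Loosening the tolerance can only keep or raise the score, which is why results for this task are typically reported at several values of x.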

πŸ“ Citation

@article{lin2024streaming,
  title={StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding},
  author={Junming Lin and Zheng Fang and Chi Chen and Zihao Wan and Fuwen Luo and Peng Li and Yang Liu and Maosong Sun},
  journal={arXiv preprint arXiv:2411.03628},
  year={2024}
}

https://arxiv.org/abs/2411.03628