---
task_categories:
- question-answering
language:
- en
size_categories:
- 1K<n<10K
dataset_info:
- config_name: Real_Time_Visual_Understanding
  features:
  - name: question_id
    dtype: string
  - name: task_type
    dtype: string
  - name: question
    dtype: string
  - name: time_stamp
    dtype: string
  - name: answer
    dtype: string
  - name: options
    dtype: string
  - name: frames_required
    dtype: string
  - name: temporal_clue_type
    dtype: string
  splits:
  - name: Real_Time_Visual_Understanding
    num_examples: 2500
- config_name: Sequential_Question_Answering
  features:
  - name: question_id
    dtype: string
  - name: task_type
    dtype: string
  - name: question
    dtype: string
  - name: time_stamp
    dtype: string
  - name: answer
    dtype: string
  - name: options
    dtype: string
  - name: frames_required
    dtype: string
  - name: temporal_clue_type
    dtype: string
  splits:
  - name: Sequential_Question_Answering
    num_examples: 250
- config_name: Contextual_Understanding
  features:
  - name: question_id
    dtype: string
  - name: task_type
    dtype: string
  - name: question
    dtype: string
  - name: time_stamp
    dtype: string
  - name: answer
    dtype: string
  - name: options
    dtype: string
  - name: frames_required
    dtype: string
  - name: temporal_clue_type
    dtype: string
  splits:
  - name: Contextual_Understanding
    num_examples: 500
- config_name: Omni_Source_Understanding
  features:
  - name: question_id
    dtype: string
  - name: task_type
    dtype: string
  - name: question
    dtype: string
  - name: time_stamp
    dtype: string
  - name: answer
    dtype: string
  - name: options
    dtype: string
  - name: frames_required
    dtype: string
  - name: temporal_clue_type
    dtype: string
  splits:
  - name: Omni_Source_Understanding
    num_examples: 1000
- config_name: Proactive_Output
  features:
  - name: question_id
    dtype: string
  - name: task_type
    dtype: string
  - name: question
    dtype: string
  - name: time_stamp
    dtype: string
  - name: ground_truth_time_stamp
    dtype: string
  - name: ground_truth_output
    dtype: string
  - name: frames_required
    dtype: string
  - name: temporal_clue_type
    dtype: string
  splits:
  - name: Proactive_Output
    num_examples: 250
configs:
- config_name: Real_Time_Visual_Understanding
  data_files:
  - split: Real_Time_Visual_Understanding
    path: StreamingBench/Real_Time_Visual_Understanding.csv
- config_name: Sequential_Question_Answering
  data_files:
  - split: Sequential_Question_Answering
    path: StreamingBench/Sequential_Question_Answering.csv
- config_name: Contextual_Understanding
  data_files:
  - split: Contextual_Understanding
    path: StreamingBench/Contextual_Understanding.csv
- config_name: Omni_Source_Understanding
  data_files:
  - split: Omni_Source_Understanding
    path: StreamingBench/Omni_Source_Understanding.csv
- config_name: Proactive_Output
  data_files:
  - split: Proactive_Output
    path: StreamingBench/Proactive_Output_50.csv
  - split: Proactive_Output_250
    path: StreamingBench/Proactive_Output.csv
---
# StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding
StreamingBench evaluates Multimodal Large Language Models (MLLMs) on real-time, streaming video understanding tasks.
## Overview
As MLLMs continue to advance, they remain largely focused on offline video comprehension, where all frames are pre-loaded before any query is made. This is far from the human ability to process and respond to video streams in real time, keeping pace with the dynamic nature of multimedia content. To bridge this gap, StreamingBench introduces the first comprehensive benchmark for streaming video understanding in MLLMs.
### Key Evaluation Aspects
- Real-time Visual Understanding: Can the model process and respond to visual changes in real time?
- Omni-source Understanding: Does the model integrate visual and audio inputs synchronously in real-time video streams?
- Contextual Understanding: Can the model comprehend the broader context within video streams?
### Dataset Statistics
- 900 diverse videos
- 4,500 human-annotated QA pairs
- Five questions per video, asked at different timestamps
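Each of the five configurations listed in the metadata above can be loaded on its own with the Hugging Face `datasets` library. The snippet below is a minimal loading sketch; the repository ID is a placeholder assumption, so substitute this dataset's actual Hub ID.

```python
from datasets import load_dataset

# Placeholder -- replace with this dataset's actual Hugging Face Hub ID.
REPO_ID = "<org>/StreamingBench"

# Load one configuration; each config exposes a split of the same name.
ds = load_dataset(REPO_ID, "Real_Time_Visual_Understanding",
                  split="Real_Time_Visual_Understanding")

print(ds.num_rows)      # expected: 2500
print(ds.column_names)  # question_id, task_type, question, time_stamp,
                        # answer, options, frames_required, temporal_clue_type

# Every field is typed as a string, so columns such as `options` may need
# parsing before use.
print(ds[0]["question"], ds[0]["answer"])
```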
## Video Categories

## Task Taxonomy
## Experimental Results

Performance of various MLLMs on StreamingBench, reported under two context settings: all preceding context, and only the 60 seconds of context preceding the query time (results tables omitted here).

Comparison of the main experiment vs. the 60-second video context setting (table omitted).

Performance of different MLLMs on the proactive output task (table omitted). "≤ x s" means that an answer is considered correct if the actual output time is within x seconds of the ground truth.
## Citation
```bibtex
@article{lin2024streaming,
  title={StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding},
  author={Junming Lin and Zheng Fang and Chi Chen and Zihao Wan and Fuwen Luo and Peng Li and Yang Liu and Maosong Sun},
  journal={arXiv preprint arXiv:2411.03628},
  year={2024}
}
```