# VCR-Bench (A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning)
Homepage | Dataset | Paper | arXiv | GitHub
## Dataset Details
As shown in the figure below, current video benchmarks often lack comprehensive annotations of CoT steps, focusing only on the accuracy of final answers during model evaluation while neglecting the quality of the reasoning process. This evaluation approach makes it difficult to comprehensively assess a model's actual drawbacks during the CoT reasoning process.
To fill this gap, we propose VCR-Bench, a benchmark specifically designed to evaluate the Video Chain-of-Thought Reasoning capabilities of LVLMs.
In VCR-Bench, we construct a multi-dimensional evaluation framework that defines 7 distinct task dimensions, comprehensively covering a diverse range of video types and durations. For each data sample, in addition to a standard answer, we meticulously curate detailed and accurate reference stepwise rationales as CoT annotations.
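Because VCR-Bench scores the reasoning process itself rather than only the final answer, it may help to see what step-level scoring can look like. The sketch below is a minimal illustration, not the paper's official protocol: the `steps_match` helper, the similarity `threshold`, and the recall/precision/F1 framing are all assumptions, with a naive string-similarity check standing in for a real semantic judge.

```python
# Illustrative step-level CoT scoring: compare a model's reasoning steps
# against the reference stepwise rationale. NOT the official VCR-Bench metric;
# a crude textual matcher stands in for a proper semantic judge.
from difflib import SequenceMatcher

def steps_match(ref_step: str, model_step: str, threshold: float = 0.6) -> bool:
    """Naive textual similarity as a placeholder for a semantic judge."""
    return SequenceMatcher(None, ref_step.lower(), model_step.lower()).ratio() >= threshold

def cot_f1(reference_steps: list[str], model_steps: list[str]) -> float:
    """Recall: fraction of reference steps the model's CoT covers.
    Precision: fraction of model steps grounded in the reference.
    F1 combines the two."""
    if not reference_steps or not model_steps:
        return 0.0
    recall = sum(any(steps_match(r, m) for m in model_steps)
                 for r in reference_steps) / len(reference_steps)
    precision = sum(any(steps_match(r, m) for r in reference_steps)
                    for m in model_steps) / len(model_steps)
    return 0.0 if recall + precision == 0 else 2 * recall * precision / (recall + precision)

print(cot_f1(
    ["The person picks up a red cup.", "They pour water into it."],
    ["A red cup is picked up.", "Water is poured into the cup.", "The cup is blue."],
))
```

In practice, a stronger judge (e.g., an LLM grader) would replace the string matcher when deciding whether two reasoning steps express the same content.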
To ensure the diversity of video data and the richness of sample information, we curated VCR-Bench by selecting and integrating data from 14 existing video benchmarks. These include datasets focused on video perception and comprehension, datasets targeting subject-knowledge understanding and reasoning, datasets emphasizing long-form video understanding, datasets specialized in video temporal localization and analysis, and datasets dedicated to video scene reasoning.
All samples underwent rigorous manual annotation and quality control, ultimately resulting in the creation of VCR-Bench, which includes 859 videos and 1,034 high-quality question-answer pairs.
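For quick inspection, a minimal loading sketch is shown below, assuming the dataset is hosted on the Hugging Face Hub. The repo id, split name, and field names are assumptions; check the dataset viewer for the actual schema.

```python
# Minimal loading sketch via the Hugging Face `datasets` library.
# Repo id and split are assumptions -- verify before use.
from datasets import load_dataset

ds = load_dataset("VLM-Reasoning/VCR-Bench", split="test")  # repo id assumed
sample = ds[0]
print(sample.keys())  # inspect the actual field names (question, answer, rationale, ...)
```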
## Mini-Leaderboard
We show a mini-leaderboard here; please see our paper or homepage for more information.
| Model | Avg |
|---|---|
| o1 | 56.7 |
| GPT-4o | 52.1 |
| Gemini-2.0-Flash | 51.7 |
| GPT-4V (low) | 46.9 |
| Gemini-1.5-Pro | 44.0 |
| Claude 3.5 Sonnet | 41.0 |
| Aria-25B | 38.2 |
| Qwen2.5-VL-72B | 37.9 |
| LLaVA-Video-72B | 36.6 |
| LLaVA-OneVision-72B | 36.4 |
| InternVideo2.5-8B | 33.0 |
| LLaVA-Video-7B | 32.5 |
| VideoLLaMA3-7B | 32.5 |
| InternVL2.5-78B | 30.9 |
| LLaVA-OneVision-7B | 30.7 |
| Qwen2.5-VL-7B | 30.4 |
| MiniCPM-o2.6-8B | 26.9 |
| InternVL2.5-8B | 23.9 |
| mPLUG-Owl3-7B | 7.3 |
| Llama-3.2-11B-Vision | 4.9 |