---
license: cc-by-nc-sa-4.0
configs:
- config_name: video_perspective
  data_files: video_perspective.json
- config_name: question_perspective
  data_files: question_perspective.json
- config_name: train
  data_files: train.json
task_categories:
- video-text-to-text
---
## 🔥 News
- **2025.03.19** 🌟 We released FAVOR-Bench, a new benchmark for fine-grained video motion understanding that spans both ego-centric and third-person perspectives, with comprehensive evaluation covering both close-ended QA tasks and open-ended descriptive tasks!
## Introduction
Multimodal Large Language Models (MLLMs) have shown impressive video content understanding capabilities but struggle with fine-grained motion comprehension. To comprehensively assess the motion understanding ability of existing MLLMs, we introduce FAVOR-Bench, which comprises 1,776 videos from both ego-centric and third-person perspectives and enables assessment through both close-ended and open-ended tasks. For close-ended evaluation, we carefully design 8,184 multiple-choice question-answer pairs spanning six distinct sub-tasks. For open-ended evaluation, we employ GPT-assisted evaluation and additionally develop a novel, cost-efficient LLM-free assessment method that improves the interpretability and accessibility of benchmarking. Comprehensive experiments with 21 state-of-the-art MLLMs reveal significant limitations in their ability to comprehend and describe detailed temporal dynamics in video motions. To alleviate this limitation, we further build FAVOR-Train, a dataset of 17,152 videos with fine-grained motion annotations. Fine-tuning Qwen2.5-VL on FAVOR-Train yields consistent improvements on motion-related tasks across TVBench, MotionBench, and our FAVOR-Bench. Our assessment results demonstrate that the proposed FAVOR-Bench and FAVOR-Train provide valuable tools for the community to develop more powerful video understanding models.
## Evaluation Tasks
## Dataset
### License
Our dataset is released under the CC-BY-NC-SA-4.0 license.
FAVOR-Bench is intended for academic research only; commercial use in any form is prohibited. We do not own the copyright of any raw video files.
If any content in FAVOR-Bench infringes your rights, please contact [email protected] or raise an issue directly, and we will remove it immediately.
### FAVOR-Bench Videos
We provide all self-collected video clips from TV series and animations in this repository.
For publicly available videos, you can download them from their original sources:
1. Charades: https://prior.allenai.org/projects/charades
2. EgoTaskQA: https://sites.google.com/view/egotaskqa
### FAVOR-Train Videos
For videos originating from Koala36M, we provide their YouTube links and start and end timestamps. You can download the corresponding clips with tools such as yt-dlp.
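As a minimal sketch, clip extraction can be scripted by assembling a yt-dlp command from a link and its timestamps. The URL, timestamps, and output path below are illustrative, not values from the dataset; check the provided JSON for the actual fields.

```python
def ytdlp_command(url, start, end, out_path):
    """Build a yt-dlp command that downloads only the [start, end] section.

    `start`/`end` are in seconds; `--download-sections "*START-END"` is
    yt-dlp's syntax for fetching just a time range of the source video.
    """
    return [
        "yt-dlp",
        "--download-sections", f"*{start}-{end}",
        "-o", out_path,
        url,
    ]

# Illustrative example; execute with subprocess.run(cmd, check=True).
cmd = ytdlp_command("https://www.youtube.com/watch?v=VIDEO_ID", 12.0, 34.5, "clip.mp4")
```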
For publicly available videos, you can download them from their original sources:
1. Charades-ego: https://prior.allenai.org/projects/charades-ego
2. EgoTaskQA: https://sites.google.com/view/egotaskqa
3. EgoExoLearn: https://huggingface.co/datasets/hyf015/EgoExoLearn
4. EgoExo4D: https://ego-exo4d-data.org/
For EgoExoLearn and EgoExo4D, you can crop the original videos yourself according to the start and end times provided in the JSON file.
### JSON Files
For FAVOR-Bench, we provide both question-perspective and video-perspective files.
In the video-perspective file, each entry corresponds to one video and provides its caption, camera motion, subject attributes, motion list, chronological motion list, and all associated questions (question, options, correct answer, and task type).
In the question-perspective file, each entry corresponds to a single question and includes the question, options, correct answer, task type, and the name of the corresponding video.
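The two layouts carry the same question data. As an illustrative sketch (all field names and task-type labels below are assumptions, not the actual JSON keys), question-perspective entries can be regrouped into a video-perspective dict:

```python
from collections import defaultdict

# Hypothetical question-perspective entries, mirroring the fields described above.
questions = [
    {"video": "clip_001.mp4", "question": "What does the subject do first?",
     "options": ["A. wave", "B. sit down"], "answer": "B", "task_type": "action_order"},
    {"video": "clip_001.mp4", "question": "Which hand holds the cup?",
     "options": ["A. left", "B. right"], "answer": "A", "task_type": "subject_attribute"},
    {"video": "clip_002.mp4", "question": "How does the camera move?",
     "options": ["A. pan left", "B. static"], "answer": "B", "task_type": "camera_motion"},
]

# Regroup: one entry per video, with all of its questions attached.
by_video = defaultdict(list)
for q in questions:
    by_video[q["video"]].append({k: v for k, v in q.items() if k != "video"})
```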
## 📈 Results
- Model Comparison:
- Benchmark Comparison:
- Benchmark Statistics:
## Citation
If you find our work helpful for your research, please consider citing our paper.
    @misc{tu2025favor,
          title={FAVOR-Bench: A Comprehensive Benchmark for Fine-Grained Video Motion Understanding},
          author={Chongjun Tu and Lin Zhang and Pengtao Chen and Peng Ye and Xianfang Zeng and Wei Cheng and Gang Yu and Tao Chen},
          year={2025},
          eprint={2503.14935},
          archivePrefix={arXiv},
          primaryClass={cs.CV}
    }