---
language:
  - en
license: mit
pretty_name: VCRBench
task_categories:
  - video-text-to-text
  - visual-question-answering
tags:
  - video
  - multimodal
  - video-language
  - causal-reasoning
  - multi-step-reasoning
  - long-form-reasoning
  - large-video-language-model
  - large-multimodal-model
  - multimodal-large-language-model
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: test
        path: data.json
---

# VCRBench: Exploring Long-form Causal Reasoning Capabilities of Large Video Language Models

Authors: Pritam Sarkar and Ali Etemad

This repository provides the official implementation of VCRBench.

## Usage

Please check our GitHub repo for usage details: VCRBench

```python
from dataset import VCRBench

# Load the benchmark from the annotation file; video paths are resolved relative to video_root.
dataset = VCRBench(
    question_file="data.json",
    video_root="./",
    mode="default",
)

# Inspect a single sample.
for sample in dataset:
    print(sample["question"])
    print(sample["answer"])
    print("*" * 10)
    break
```
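
Alternatively, the raw annotations can be loaded directly with the Hugging Face `datasets` library. This is a minimal sketch: the repository id `pritamqu/VCRBench` is assumed, and it loads only the JSON annotations (not the videos).

```python
from datasets import load_dataset

# Assumed repository id; loads the "test" split defined in the dataset config (data.json).
ds = load_dataset("pritamqu/VCRBench", split="test")

# Each record contains the annotation fields (e.g., question and answer).
print(ds[0])
```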

## Licensing Information

This dataset incorporates samples from CrossTask that are subject to their respective original licenses. Users must adhere to the terms and conditions specified by those licenses; this project does not impose any additional constraints beyond them. Users must also ensure their usage complies with all applicable laws and regulations. This repository is released under the MIT License. See LICENSE for details.

## Citation Information

If you find this work useful, please cite it using the following BibTeX entry:

```bibtex
@misc{sarkar2025vcrbench,
      title={VCRBench: Exploring Long-form Causal Reasoning Capabilities of Large Video Language Models},
      author={Pritam Sarkar and Ali Etemad},
      year={2025},
      eprint={2505.08455},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
}
```

## Contact

For any queries, please create an issue at VCRBench.