---
license: mit
arxiv: 2503.11557
task_categories:
  - image-to-text
language:
  - en
tags:
  - MLLM
  - Reasoning
pretty_name: VERIFY
size_categories:
  - n<1K
dataset_info:
  features:
    - name: uid
      dtype: string
    - name: image
      dtype: image
    - name: question
      dtype: string
    - name: options
      sequence: string
    - name: reasoning
      dtype: string
    - name: answer
      dtype: string
    - name: year
      dtype: int32
  splits:
    - name: test
      num_bytes: 4313494
      num_examples: 50
  download_size: 4289292
  dataset_size: 4313494
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

# VERIFY: A Benchmark of Visual Explanation and Reasoning for Investigating Multimodal Reasoning FidelitY

VERIFY is the first benchmark explicitly designed to assess the reasoning paths of MLLMs in visual reasoning tasks. By introducing novel evaluation metrics that go beyond mere accuracy, VERIFY highlights critical limitations in current MLLMs and emphasizes the need for a more balanced approach to visual perception and logical reasoning.

Details of the benchmark can be viewed at the VERIFY project page.

🔔 **Teaser:** This teaser is provided for interested users. Simply copy and paste an image to quickly try it out with an advanced model such as O1 or Gemini.
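
If you would rather fetch an image programmatically than copy it from the dataset viewer, here is a minimal sketch (the output filename `teaser.png` is an arbitrary choice, not part of the dataset):

```python
from datasets import load_dataset

# Grab the first teaser example and save its image locally so it can be
# pasted into a chat interface; "teaser.png" is an arbitrary filename.
example = load_dataset("jing-bi/verify-teaser")["test"][0]
example["image"].save("teaser.png")  # the image field decodes to a PIL image
```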

## Usage

You can load this dataset with the following code (make sure you have installed the Hugging Face `datasets` library):

```python
from datasets import load_dataset
from IPython.display import display

# Load the teaser split of the VERIFY benchmark
dataset = load_dataset("jing-bi/verify-teaser")

# Inspect the first example of the test split
example = dataset["test"][0]
print("Full example:", example)

display(example['image'])  # render the puzzle image (in a notebook)
print("Problem ID:", example['uid'])
print("Question:", example['question'])
print("Options:", example['options'])
print("Reasoning:", example['reasoning'])
print("Answer:", example['answer'])
```

## Contact

For any questions or further information, please contact:


## Citation

If you find this work useful in your research, please consider citing our paper:

```bibtex
@misc{bi2025verify,
    title={VERIFY: A Benchmark of Visual Explanation and Reasoning for Investigating Multimodal Reasoning Fidelity},
    author={Jing Bi and Junjia Guo and Susan Liang and Guangyu Sun and Luchuan Song and Yunlong Tang and Jinxi He and Jiarui Wu and Ali Vosoughi and Chen Chen and Chenliang Xu},
    year={2025},
    eprint={2503.11557},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```