---
language:
  - en
license: cc-by-4.0
size_categories:
  - 1K<n<10K
task_categories:
  - question-answering
  - visual-question-answering
  - multiple-choice
pretty_name: MMSI-Bench
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: images
      sequence: image
    - name: question_type
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: thought
      dtype: string
  splits:
    - name: test
      num_examples: 1000
configs:
  - config_name: default
    data_files:
      - split: test
        path: MMSI_Bench.parquet
---

# MMSI-Bench

This repo contains the evaluation data for the paper "MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence".

🌐 Homepage | πŸ€— Dataset | πŸ“‘ Paper | πŸ’» Code | πŸ“– arXiv

## πŸ”” News

πŸ”₯[2025-06-18]: MMSI-Bench has been supported in the LMMs-Eval repository.

✨[2025-06-11]: MMSI-Bench was used for evaluation in the experiments of VILASR.

πŸ”₯[2025-06-09]: MMSI-Bench has been supported in the VLMEvalKit repository.

πŸ”₯[2025-05-30]: We released the arXiv paper.

## Load Dataset

```python
from datasets import load_dataset

mmsi_bench = load_dataset("RunsenXu/MMSI-Bench")
print(mmsi_bench)
```
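Once loaded this way, the `images` field of each example is decoded by the `datasets` library into PIL images, so a sample can be inspected and its images saved without touching the parquet file directly. A minimal sketch, assuming the schema listed in the metadata above:

```python
# Inspect the first test example; `images` is a list of PIL.Image objects
# decoded automatically by the `datasets` library.
sample = mmsi_bench["test"][0]
print(sample["question_type"])
print(sample["question"])
print(sample["answer"])

# Save the example's images; convert to RGB in case a source image has an alpha channel.
for n, img in enumerate(sample["images"]):
    img.convert("RGB").save(f"sample0_{n}.jpg")
```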

Alternatively, after downloading the parquet file, you can read each record with pandas, decode the images from their binary representation, and save them as JPG files:

```python
import pandas as pd
import os

df = pd.read_parquet('MMSI_Bench.parquet')

output_dir = './images'
os.makedirs(output_dir, exist_ok=True)

for idx, row in df.iterrows():
    id_val = row['id']
    images = row['images']
    question_type = row['question_type']
    question = row['question']
    answer = row['answer']
    thought = row['thought']

    # Each entry in `images` is stored in the parquet as a struct holding the
    # raw encoded image under the 'bytes' key; write those bytes out to files.
    image_paths = []
    if images is not None:
        for n, img_data in enumerate(images):
            image_path = f"{output_dir}/{id_val}_{n}.jpg"
            with open(image_path, "wb") as f:
                f.write(img_data['bytes'])
            image_paths.append(image_path)

    print(f"id: {id_val}")
    print(f"images: {image_paths}")
    print(f"question_type: {question_type}")
    print(f"question: {question}")
    print(f"answer: {answer}")
    print(f"thought: {thought}")
    print("-" * 50)
```

## Evaluation

Please refer to the evaluation guidelines of VLMEvalKit.
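VLMEvalKit handles prompting, answer extraction, and scoring end to end. For a quick offline sanity check, accuracy on MMSI-Bench can be approximated by pulling the predicted option letter out of each model response and comparing it with the `answer` field. The sketch below is illustrative only: the regex-based letter extraction and the `predictions` mapping are assumptions, not the official VLMEvalKit protocol, and it assumes `answer` stores a single option letter.

```python
import re

def extract_choice(prediction):
    """Return the first standalone option letter (A-D) in a model response, else None."""
    match = re.search(r"\b([A-D])\b", prediction)
    return match.group(1) if match else None

def score(dataset_split, predictions):
    """Accuracy over examples whose id appears in `predictions` (id -> raw model output)."""
    correct, total = 0, 0
    for example in dataset_split:
        if example["id"] not in predictions:
            continue
        total += 1
        if extract_choice(predictions[example["id"]]) == example["answer"]:
            correct += 1
    return correct / total if total else 0.0

# Hypothetical usage with two fabricated responses:
# acc = score(mmsi_bench["test"], {0: "The answer is B.", 1: "C"})
# print(f"accuracy: {acc:.1%}")
```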

πŸ† MMSI-Bench Leaderboard

| Model | Avg. (%) | Type |
|---|---|---|
| πŸ₯‡ Human Level | 97.2 | Baseline |
| πŸ₯ˆ o3 | 41.0 | Proprietary |
| πŸ₯‰ GPT-4.5 | 40.3 | Proprietary |
| Gemini-2.5-Pro--Thinking | 37.0 | Proprietary |
| Gemini-2.5-Pro | 36.9 | Proprietary |
| Doubao-1.5-pro | 33.0 | Proprietary |
| GPT-4.1 | 30.9 | Proprietary |
| Qwen2.5-VL-72B | 30.7 | Open-source |
| NVILA-15B | 30.5 | Open-source |
| GPT-4o | 30.3 | Proprietary |
| Claude-3.7-Sonnet--Thinking | 30.2 | Proprietary |
| Seed1.5-VL | 29.7 | Proprietary |
| InternVL2.5-2B | 29.0 | Open-source |
| InternVL2.5-8B | 28.7 | Open-source |
| DeepSeek-VL2-Small | 28.6 | Open-source |
| InternVL3-78B | 28.5 | Open-source |
| InternVL2.5-78B | 28.5 | Open-source |
| LLaVA-OneVision-72B | 28.4 | Open-source |
| NVILA-8B | 28.1 | Open-source |
| InternVL2.5-26B | 28.0 | Open-source |
| DeepSeek-VL2 | 27.1 | Open-source |
| InternVL3-1B | 27.0 | Open-source |
| InternVL3-9B | 26.7 | Open-source |
| Qwen2.5-VL-3B | 26.5 | Open-source |
| InternVL2.5-4B | 26.3 | Open-source |
| InternVL2.5-1B | 26.1 | Open-source |
| Qwen2.5-VL-7B | 25.9 | Open-source |
| InternVL3-8B | 25.7 | Open-source |
| Llama-3.2-11B-Vision | 25.4 | Open-source |
| InternVL3-2B | 25.3 | Open-source |
| πŸƒ Random Guessing | 25.0 | Baseline |
| LLaVA-OneVision-7B | 24.5 | Open-source |
| DeepSeek-VL2-Tiny | 24.0 | Open-source |
| Blind GPT-4o | 22.7 | Baseline |

## Acknowledgment

MMSI-Bench makes use of data from existing image datasets: ScanNet, nuScenes, Matterport3D, Ego4D, AgiBot-World, DTU, DAVIS-2017, and Waymo. We thank these teams for their open-source contributions.

## Contact

## Citation

```bibtex
@article{yang2025mmsi,
  title={MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence},
  author={Yang, Sihan and Xu, Runsen and Xie, Yiman and Yang, Sizhe and Li, Mo and Lin, Jingli and Zhu, Chenming and Chen, Xiaochen and Duan, Haodong and Yue, Xiangyu and Lin, Dahua and Wang, Tai and Pang, Jiangmiao},
  journal={arXiv preprint arXiv:2505.23764},
  year={2025}
}
```