RunsenXu committed 7094d74 (verified) · 1 parent: eb8e526

Upload folder using huggingface_hub

Files changed (2)
  1. MMSI_Bench.parquet +3 -0
  2. README.md +80 -0
MMSI_Bench.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c762def08ff9875455672f1ace2c44a9705b963d2e8f806b186a250399dc9017
size 704663038

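The parquet file is tracked with Git LFS, so the committed pointer above records only the object hash and size (704663038 bytes, roughly 705 MB); the data itself lives on the Hub. A minimal sketch of fetching the raw file directly with `huggingface_hub`, matching the commit message:

```python
from huggingface_hub import hf_hub_download

# Download the raw parquet from the dataset repo (~705 MB, per the LFS pointer above)
path = hf_hub_download(
    repo_id="RunsenXu/MMSI-Bench",
    filename="MMSI_Bench.parquet",
    repo_type="dataset",
)
print(path)
```
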
README.md ADDED
@@ -0,0 +1,80 @@

# MMSI-Bench
This repo contains evaluation code for the paper "[MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence]".

[**🌐 Homepage**](https://runsenxu.com/projects/MMSI_Bench/) | [**🤗 Dataset**](https://huggingface.co/datasets/RunsenXu/MMSI-Bench) | [**📑 Paper**] | [**💻 Code**](https://github.com/OpenRobotLab/MMSI_Bench/tree/main) | [**📖 arXiv**]

## 🔔 News
**🔥 [2025-05-31]: MMSI-Bench is now supported in the [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) repository.**

**🔥 [2025-05-30]: We released the arXiv paper.**

## Load Dataset
```python
from datasets import load_dataset

# Download MMSI-Bench from the Hugging Face Hub
mmsi_bench = load_dataset("RunsenXu/MMSI-Bench")
print(mmsi_bench)
```
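
To see what a record looks like before evaluating, you can print one sample. A minimal sketch that makes no assumptions about the split or column names beyond what `print(mmsi_bench)` reports:

```python
from datasets import load_dataset

mmsi_bench = load_dataset("RunsenXu/MMSI-Bench")

# Pick the first available split and its first record;
# long values (e.g. encoded images) are truncated for readability.
split = next(iter(mmsi_bench))
sample = mmsi_bench[split][0]
for key, value in sample.items():
    print(f"{key}: {str(value)[:80]}")
```
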
## Evaluation
Please refer to the [evaluation guidelines](https://github.com/open-compass/VLMEvalKit/blob/main/docs/en/Quickstart.md) of [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).

<!-- <img src="assets/radar_v1.png" width="400" /> -->
## 🏆 MMSI-Bench Leaderboard

| Model | Avg. (%) | Type |
|------------------------------|:--------:|:-------------|
| 🥇 **Human Level** | 97.2 | Baseline |
| 🥈 o3 | 41.0 | Proprietary |
| 🥉 GPT-4.5 | 40.3 | Proprietary |
| Gemini-2.5-Pro--Thinking | 37.0 | Proprietary |
| Gemini-2.5-Pro | 36.9 | Proprietary |
| Doubao-1.5-pro | 33.0 | Proprietary |
| GPT-4.1 | 30.9 | Proprietary |
| Qwen2.5-VL-72B | 30.7 | Open-source |
| NVILA-15B | 30.5 | Open-source |
| GPT-4o | 30.3 | Proprietary |
| Claude-3.7-Sonnet--Thinking | 30.2 | Proprietary |
| Seed1.5-VL | 29.7 | Proprietary |
| InternVL2.5-2B | 29.0 | Open-source |
| InternVL2.5-8B | 28.7 | Open-source |
| DeepSeek-VL2-Small | 28.6 | Open-source |
| InternVL3-78B | 28.5 | Open-source |
| InternVL2.5-78B | 28.5 | Open-source |
| LLaVA-OneVision-72B | 28.4 | Open-source |
| NVILA-8B | 28.1 | Open-source |
| InternVL2.5-26B | 28.0 | Open-source |
| DeepSeek-VL2 | 27.1 | Open-source |
| InternVL3-1B | 27.0 | Open-source |
| InternVL3-9B | 26.7 | Open-source |
| Qwen2.5-VL-3B | 26.5 | Open-source |
| InternVL2.5-4B | 26.3 | Open-source |
| InternVL2.5-1B | 26.1 | Open-source |
| Qwen2.5-VL-7B | 25.9 | Open-source |
| InternVL3-8B | 25.7 | Open-source |
| Llama-3.2-11B-Vision | 25.4 | Open-source |
| InternVL3-2B | 25.3 | Open-source |
| 🃏 **Random Guessing** | 25.0 | Baseline |
| LLaVA-OneVision-7B | 24.5 | Open-source |
| DeepSeek-VL2-Tiny | 24.0 | Open-source |
| Blind GPT-4o | 22.7 | Baseline |

## Acknowledgment
MMSI-Bench makes use of data from existing image datasets: [ScanNet](http://www.scan-net.org/), [nuScenes](https://www.nuscenes.org/), [Matterport3D](https://niessner.github.io/Matterport/), [Ego4D](https://ego4d-data.org/), [AgiBot-World](https://agibot-world.cn/), [DTU](https://roboimagedata.compute.dtu.dk/?page_id=36), [DAVIS-2017](https://davischallenge.org/), and [Waymo](https://waymo.com/open/). We thank these teams for their open-source contributions.

## Contact
- Sihan Yang: [email protected]
- Runsen Xu: [email protected]

## Citation

**BibTeX:**
```bibtex
```