Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models
Abstract
The Multi-SpatialMLLM framework enhances MLLMs with multi-frame spatial understanding through depth perception, visual correspondence, and dynamic perception, achieving significant gains on multi-frame reasoning tasks.
Multi-modal large language models (MLLMs) have rapidly advanced in visual tasks, yet their spatial understanding remains limited to single images, leaving them ill-suited for robotics and other real-world applications that require multi-frame reasoning. In this paper, we propose a framework to equip MLLMs with robust multi-frame spatial understanding by integrating depth perception, visual correspondence, and dynamic perception. Central to our approach is the MultiSPA dataset, a novel, large-scale collection of more than 27 million samples spanning diverse 3D and 4D scenes. Alongside MultiSPA, we introduce a comprehensive benchmark that tests a wide spectrum of spatial tasks under uniform metrics. Our resulting model, Multi-SpatialMLLM, achieves significant gains over baselines and proprietary systems, demonstrating scalable, generalizable multi-frame reasoning. We further observe multi-task benefits and early indications of emergent capabilities in challenging scenarios, and showcase how our model can serve as a multi-frame reward annotator for robotics.
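As a rough illustration of what multi-frame spatial question answering involves, the sketch below shows how a two-frame sample (e.g., asking for the camera displacement between frames) and a simple uniform metric might be structured. The field names, task, and tolerance are illustrative assumptions, not the MultiSPA dataset's actual schema or the paper's evaluation protocol.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical multi-frame spatial QA sample; field names are illustrative,
# not the MultiSPA dataset's actual schema.
@dataclass
class MultiFrameSample:
    frame_paths: List[str]   # two or more RGB frames of the same scene
    question: str            # e.g. "How far did the camera move between the frames?"
    answer: float            # ground-truth value in meters
    tolerance: float = 0.1   # relative-error threshold for counting a prediction as correct

def is_correct(sample: MultiFrameSample, prediction: float) -> bool:
    """Uniform-metric sketch: a numeric prediction counts as correct if its
    relative error w.r.t. the ground truth is within the tolerance."""
    denom = max(abs(sample.answer), 1e-6)
    return abs(prediction - sample.answer) / denom <= sample.tolerance

# Example usage with made-up values.
sample = MultiFrameSample(
    frame_paths=["scene_000/frame_01.png", "scene_000/frame_02.png"],
    question="How far (in meters) did the camera move between the two frames?",
    answer=0.85,
)
print(is_correct(sample, prediction=0.80))  # True: within 10% relative error
```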
Community
We contribute an MLLM, dataset, and benchmark for multi-frame spatial understanding.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D (2025)
- STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding? (2025)
- Extending Large Vision-Language Model for Diverse Interactive Tasks in Autonomous Driving (2025)
- NuScenes-SpatialQA: A Spatial Understanding and Reasoning Benchmark for Vision-Language Models in Autonomous Driving (2025)
- ST-VLM: Kinematic Instruction Tuning for Spatio-Temporal Reasoning in Vision-Language Models (2025)
- SSR: Enhancing Depth Perception in Vision-Language Models via Rationale-Guided Spatial Reasoning (2025)
- SpatialPrompting: Keyframe-driven Zero-Shot Spatial Reasoning with Off-the-Shelf Multimodal Large Language Models (2025)