A newer version of this model is available: remyxai/SpaceOm

SpaceQwen2.5-VL-3B-Instruct

The model was presented in the paper OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models. More information can be found at the project page.

  • Model Type: Multimodal, Vision-Language Model
  • Architecture: Qwen2.5-VL-3B-Instruct
  • Model Size: 3.75B parameters (FP16)
  • Finetuned from: Qwen/Qwen2.5-VL-3B-Instruct
  • Finetune Strategy: LoRA (Low-Rank Adaptation)
  • License: Apache-2.0

Model Overview

This model uses data synthesis techniques and publicly available models to reproduce the work described in SpatialVLM to enhance the spatial reasoning of multimodal models. With a pipeline of expert models, we can infer spatial relationships between objects in a scene to create a VQA dataset for spatial reasoning.
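
For a sense of what the pipeline emits, the hypothetical record below mirrors the image + question + answer format used in the dataset; the file name, question, and answer are illustrative, not an actual OpenSpaces entry.

# Hypothetical example of a synthesized spatial-VQA record (not a real OpenSpaces row);
# the pipeline grounds the answer in estimated object positions and sizes.
sample = {
    "image": "warehouse_scene.jpg",  # RGB source image
    "question": "How far is the forklift from the stack of pallets?",
    "answer": "The forklift is approximately 2.1 meters from the stack of pallets.",
}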

Running SpaceQwen2.5-VL-3B-Instruct

Ollama

To launch with Ollama, run:

ollama run hf.co/remyxai/SpaceQwen2.5-VL-3B-Instruct:latest
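
Ollama's multimodal CLI generally lets you reference an image by including its file path in the prompt; the path and question below are placeholders:

ollama run hf.co/remyxai/SpaceQwen2.5-VL-3B-Instruct:latest "What is the height of the man in the red hat? /path/to/warehouse_sample_2.jpeg"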

Transformers

Install the qwen-vl-utils dependency:

pip install qwen-vl-utils[decord]==0.0.8

To run inference on a sample image:

from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "remyxai/SpaceQwen2.5-VL-3B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("remyxai/SpaceQwen2.5-VL-3B-Instruct")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://raw.githubusercontent.com/remyxai/VQASynth/refs/heads/main/assets/warehouse_sample_2.jpeg",
            },
            {"type": "text", "text": "What is the height of the man in the red hat in feet?"},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
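
The image entry does not have to be a remote URL; qwen-vl-utils also resolves local files (e.g. file:// URIs) and base64-encoded images. A minimal variation of the message above, assuming a locally saved copy of the sample image:

messages = [
    {
        "role": "user",
        "content": [
            # process_vision_info also accepts local file:// URIs and base64 data
            {"type": "image", "image": "file:///path/to/warehouse_sample_2.jpeg"},
            {"type": "text", "text": "What is the height of the man in the red hat in feet?"},
        ],
    }
]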

GGUF

Or run SpaceQwen2.5-VL-3B-Instruct using llama.cpp:

./llama-qwen2vl-cli -m /path/to/SpaceQwen2.5-VL-3B-Instruct/SpaceQwen2.5-VL-3B-Instruct-F16.gguf \
                    --mmproj /path/to/SpaceQwen2.5-VL-3B-Instruct/spaceqwen2.5-vl-3b-instruct-vision.gguf \
                    -p "What's the height of the man in the red hat?" \
                    --image /path/to/warehouse_sample_2.jpeg --threads 24 -ngl 99

Dataset & Training

SpaceQwen2.5-VL-3B-Instruct uses LoRA to fine-tune Qwen2.5-VL-3B-Instruct on the OpenSpaces dataset.

Dataset Summary:

  • ~10k synthetic spatial reasoning traces
  • Question types: spatial relations (metric distances with units, above, left-of, contains, closest-to)
  • Format: image (RGB) + question + answer
  • Dataset: OpenSpaces
  • Code: VQASynth
  • Reference: SpatialVLM

Scripts for LoRA SFT are available in trl.
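
As a rough sketch of the adapter setup (not the exact training configuration used for SpaceQwen; the rank, alpha, and target modules below are assumed values), LoRA adapters can be attached to the base model with peft before supervised fine-tuning:

from transformers import Qwen2_5_VLForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Load the base model that SpaceQwen was fine-tuned from
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype="auto", device_map="auto"
)

# Assumed LoRA hyperparameters, for illustration only
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trained

The adapted model can then be trained on the OpenSpaces conversations using the linked trl scripts.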

Model Evaluation

SpatialScore

SpaceQwen shines in the 3D positional relations categories of the SpatialScore-Hard comparison featured in the table below:

[Figure: SpatialScore-Hard comparison table]

Read more about the comprehensive spatial reasoning benchmark: SpatialScore.

The following chart compares the performance of SpaceQwen and SpaceThinker across the SpatialScore benchmark sources.

[Figure: SpaceQwen vs. SpaceThinker on the SpatialScore benchmark sources]

OmniSpatial

OmniSpatial is another comprehensive spatial reasoning benchmark that assesses dynamic reasoning, complex spatial logic, spatial interaction, and perspective-taking capabilities.

[Figure: OmniSpatial benchmark results]

Learn more about OmniSpatial.

SpaCE-10

| Model | Overall | EQ | SQ | SA | OO | OS | EP | FR | SP | Source |
|---|---|---|---|---|---|---|---|---|---|---|
| InternVL2.5-4B | 36.01 | 34.30 | 34.40 | 43.60 | 44.40 | 16.50 | 31.10 | 50.10 | 33.70 | Table |
| SpaceThinker | 32.72 | 32.73 | 24.81 | 47.26 | 50.33 | 33.63 | 9.25 | 37.54 | 26.25 | GPT Eval |
| SpaceOm | 32.32 | 32.47 | 24.81 | 47.63 | 50.00 | 32.52 | 9.12 | 37.04 | 25.00 | GPT Eval |
| SpaceQwen | 31.98 | 31.19 | 25.89 | 41.61 | 51.98 | 35.18 | 10.97 | 36.54 | 22.50 | GPT Eval |
| Qwen2.5-VL-3B-Instruct | 30.00 | 31.70 | 45.50 | 39.00 | 43.00 | 25.30 | 11.50 | 22.80 | 21.20 | Table |

Legend:

  • EQ: Entity Quantification
  • SQ: Scene Quantification
  • SA: Size Assessment
  • OO: Object-Object spatial relations
  • OS: Object-Scene spatial relations
  • EP: Entity Presence
  • FR: Functional Reasoning
  • SP: Spatial Planning

ℹ️ Note: Scores for SpaceQwen, SpaceThinker, and SpaceOm are generated via gpt_eval_score on the single-choice (*-single) versions of the SpaCE-10 benchmark tasks. Other entries reflect leaderboard accuracy scores from the official SpaCE-10 evaluation table.

Read more about the SpaCE-10 benchmark or see the full results here.

⚠️ Limitations & Ethical Considerations

  • Performance may degrade in cluttered environments or under unusual camera perspectives.
  • This model was fine-tuned using synthetic reasoning over an internet image dataset.
  • Multimodal biases inherent to the base model (Qwen2.5-VL) may persist.
  • Not intended for use in safety-critical or legal decision-making.

Users are encouraged to evaluate outputs critically and consider fine-tuning for domain-specific safety and performance.

Citation

@article{chen2024spatialvlm,
  title = {SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities},
  author = {Chen, Boyuan and Xu, Zhuo and Kirmani, Sean and Ichter, Brian and Driess, Danny and Florence, Pete and Sadigh, Dorsa and Guibas, Leonidas and Xia, Fei},
  journal = {arXiv preprint arXiv:2401.12168},
  year = {2024},
  url = {https://arxiv.org/abs/2401.12168},
}

@misc{qwen2.5-VL,
    title = {Qwen2.5-VL},
    url = {https://qwenlm.github.io/blog/qwen2.5-vl/},
    author = {Qwen Team},
    month = {January},
    year = {2025}
}

@article{wu2025spatialscore,
    author    = {Wu, Haoning and Huang, Xiao and Chen, Yaohui and Zhang, Ya and Wang, Yanfeng and Xie, Weidi},
    title     = {SpatialScore: Towards Unified Evaluation for Multimodal Spatial Understanding},
    journal   = {arXiv preprint arXiv:2505.17012},
    year      = {2025},
}

@article{omnispatial25,
  title   = {OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models},
  author  = {Mengdi Jia and Zekun Qi and Shaochen Zhang and Wenyao Zhang and Xinqiang Yu and Jiawei He and He Wang and Li Yi},
  journal = {arXiv preprint arXiv:2506.03135},
  year = {2025}
}