
🌏 RoboRefer


This is the official checkpoint from our work: RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics.

Overview

RoboRefer-2B-Depth-Align is an open-source vision-language model trained on the RefSpatial dataset for depth alignment.
Date

This model was trained in June 2025.

πŸ“ Citation

If you find our code or models useful in your work, please cite our paper:

@article{zhou2025roborefer,
    title={RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics},
    author={Zhou, Enshen and An, Jingkun and Chi, Cheng and Han, Yi and Rong, Shanyu and Zhang, Chi and Wang, Pengwei and Wang, Zhongyuan and Huang, Tiejun and Sheng, Lu and others},
    journal={arXiv preprint arXiv:2506.04308},
    year={2025}
}