---
license: apache-2.0
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: video-text-to-text
tags:
- multimodal large language model
- large video-language model
base_model:
- DAMO-NLP-SG/VideoLLaMA3-2B-Image
---
# VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM
If you like our project, please give us a star ⭐ on GitHub to stay up to date with the latest updates.
## 📰 News
- [2025.6.19] 🔥 We release the demo of VideoRefer-VideoLLaMA3, hosted on HuggingFace. Feel free to try it!
- [2025.6.18] 🔥 We release a new version of VideoRefer (VideoRefer-VideoLLaMA3-7B and VideoRefer-VideoLLaMA3-2B), trained on top of VideoLLaMA3.
- [2025.4.22] 🔥 Our VideoRefer-Bench has been adopted by the Describe Anything Model (NVIDIA & UC Berkeley).
- [2025.2.27] 🔥 VideoRefer Suite has been accepted to CVPR 2025!
- [2025.2.18] 🔥 We release the VideoRefer-700K dataset on HuggingFace.
- [2025.1.1] 🔥 We release VideoRefer-7B, the code of VideoRefer, and VideoRefer-Bench.
## 🌏 Model Zoo
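The released checkpoints include VideoRefer-VideoLLaMA3-7B and VideoRefer-VideoLLaMA3-2B (see News above). The snippet below is a minimal loading sketch, assuming the checkpoints follow the standard `transformers` remote-code pattern of their VideoLLaMA3 base; the repo id, dtype, and processor usage shown here are assumptions, so please refer to the GitHub repository for the official inference example.

```python
# Minimal loading sketch. Assumptions: the HF repo id below and a
# VideoLLaMA3-style remote-code interface; the exact prompt/processor API
# may differ from what the official repo documents.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "DAMO-NLP-SG/VideoRefer-VideoLLaMA3-2B"  # assumed repo id

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,        # loads the custom modeling code shipped with the checkpoint
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
```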
## 📑 Citation
If you find VideoRefer Suite useful for your research and applications, please cite using this BibTeX:
```bibtex
@InProceedings{Yuan_2025_CVPR,
    author    = {Yuan, Yuqian and Zhang, Hang and Li, Wentong and Cheng, Zesen and Zhang, Boqiang and Li, Long and Li, Xin and Zhao, Deli and Zhang, Wenqiao and Zhuang, Yueting and Zhu, Jianke and Bing, Lidong},
    title     = {VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {18970-18980}
}

@article{damonlpsg2025videollama3,
    title   = {VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding},
    author  = {Boqiang Zhang and Kehan Li and Zesen Cheng and Zhiqiang Hu and Yuqian Yuan and Guanzheng Chen and Sicong Leng and Yuming Jiang and Hang Zhang and Xin Li and Peng Jin and Wenqi Zhang and Fan Wang and Lidong Bing and Deli Zhao},
    journal = {arXiv preprint arXiv:2501.13106},
    year    = {2025},
    url     = {https://arxiv.org/abs/2501.13106}
}
```