Update README.md
README.md CHANGED
@@ -33,6 +33,7 @@ VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM
 </div>
 
 ## 📰 News
+* **[2025.6.19]** 🔥We release the [demo](https://huggingface.co/spaces/lixin4ever/VideoRefer-VideoLLaMA3) of VideoRefer-VideoLLaMA3, hosted on HuggingFace. Feel free to try it!
 * **[2025.6.18]** 🔥We release a new version of VideoRefer ([VideoRefer-VideoLLaMA3-7B](https://huggingface.co/DAMO-NLP-SG/VideoRefer-VideoLLaMA3-7B) and [VideoRefer-VideoLLaMA3-2B](https://huggingface.co/DAMO-NLP-SG/VideoRefer-VideoLLaMA3-2B)), which are trained on top of [VideoLLaMA3](https://github.com/DAMO-NLP-SG/VideoLLaMA3).
 * **[2025.4.22]** 🔥Our VideoRefer-Bench has been adopted in the [Describe Anything Model](https://arxiv.org/pdf/2504.16072) (NVIDIA & UC Berkeley).
 * **[2025.2.27]** 🔥VideoRefer Suite has been accepted to CVPR 2025!