---
license: apache-2.0
pipeline_tag: video-text-to-text
library_name: transformers
---
**TinyLLaVA-Video-R1**

[Paper](https://arxiv.org/abs/2504.09641) | [Code](https://github.com/ZhangXJ199/TinyLLaVA-Video-R1)
This model is obtained by cold-starting [TinyLLaVA-Video](https://huggingface.co/Zhang199/TinyLLaVA-Video-Qwen2.5-3B-Group-16-512) on 16 manually annotated samples from the NextQA dataset. It serves as the base model for [TinyLLaVA-Video-R1](https://huggingface.co/Zhang199/TinyLLaVA-Video-R1).

The 16 manually annotated cold-start samples have been released [here](https://huggingface.co/datasets/Zhang199/TinyLLaVA-Video-R1-training-data).