Add library_name to metadata and include GitHub README content
This PR adds the `library_name: transformers` tag to the model card metadata. It also includes the GitHub README content, improving discoverability and clarity for users.
README.md
CHANGED
@@ -1,14 +1,209 @@
---
license: apache-2.0
pipeline_tag: video-text-to-text
library_name: transformers
---

**<center><span style="font-size:2em;">TinyLLaVA-Video-R1</span></center>**

[arXiv](https://arxiv.org/abs/2504.09641) [GitHub](https://github.com/ZhangXJ199/TinyLLaVA-Video-R1)

This model is obtained by cold-starting [TinyLLaVA-Video](https://huggingface.co/Zhang199/TinyLLaVA-Video-Qwen2.5-3B-Group-16-512) with 16 manually annotated samples from the NextQA dataset. It serves as the base model for [TinyLLaVA-Video-R1](https://huggingface.co/Zhang199/TinyLLaVA-Video-R1).

The 16 manually annotated samples used for cold-starting have been released [here](https://huggingface.co/datasets/Zhang199/TinyLLaVA-Video-R1-training-data).
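
Since the card now declares `library_name: transformers`, here is a minimal loading sketch. It assumes the checkpoint ships custom modeling code usable through the transformers Auto classes with `trust_remote_code=True`; if that is not the case, use the repository's `eval.py` (see the "Quick Inference Scripts" section below) for end-to-end video inference. The repo id is the cold-start checkpoint linked later in this card.

```python
# Hedged sketch, not an official snippet: assumes the checkpoint exposes its
# custom TinyLLaVA-Video architecture via trust_remote_code. Video preprocessing
# and prompting are handled by the repository's eval.py and are not shown here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Zhang199/TinyLLaVA-Video-Coldstart_NextQA_16"  # cold-start checkpoint linked below
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, use_fast=False)
print(model.config)
```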

<h2 align="center">TinyLLaVA-Video-R1</h2>

<h5 align="center">
<div align="center">

[Xingjian Zhang](https://scholar.google.com/citations?user=H34fwioAAAAJ&hl=zh-CN)<sup>1*</sup>,
[Siwei Wen](https://scholar.google.com/citations?user=kJRiUYwAAAAJ&hl=zh-CN)<sup>1,2*</sup>,
[Wenjun Wu](https://iai.buaa.edu.cn/info/1013/1093.htm)<sup>1,2,3</sup>,
[Lei Huang](https://huangleibuaa.github.io/)<sup>1,2,3,β</sup>

<sup>1</sup>SKLCCSE, Institute of Artificial Intelligence, Beihang University, Beijing, China<br>
<sup>2</sup>Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing, Beihang University,<br>
<sup>3</sup>Hangzhou International Innovation Institute, Beihang University, Hangzhou, China

</div>
</h5>

<div align="center">

[arXiv](https://arxiv.org/abs/2504.09641)
[Hugging Face](https://huggingface.co/Zhang199/TinyLLaVA-Video-R1)
[GitHub](https://github.com/ZhangXJ199/TinyLLaVA-Video-R1)

</div>

## News

- [2025-04] Our arXiv paper [TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning](https://arxiv.org/abs/2504.09641) is released!
- [2025-04] Our [TinyLLaVA-Video-R1](https://github.com/ZhangXJ199/TinyLLaVA-Video-R1) repository is released!

## <img id="painting_icon" width="3%" src="https://cdn-icons-png.flaticon.com/256/2435/2435606.png"> About

**TinyLLaVA-Video-R1** is a small-scale video reasoning model built upon the fully open-source [TinyLLaVA-Video](https://github.com/ZhangXJ199/TinyLLaVA-Video) framework. Designed for researchers with limited computational resources, it leverages reinforcement learning to enhance reasoning abilities while maintaining a model size under 4B parameters. **TinyLLaVA-Video-R1** demonstrates improved video question-answering performance and reflective reasoning behaviors ("**aha moments**"). The model and training process are fully traceable, ensuring reproducibility and reliability. This repository provides the model, code, and experimental setups for easy replication.

<div align="center">
<img src="images/case.png" alt="framework" width="90%" height="auto">
</div>

## Installation

1. Clone this repository and navigate to the folder:
```bash
git clone https://github.com/ZhangXJ199/TinyLLaVA-Video-R1.git
cd TinyLLaVA-Video-R1
```

2. Create a conda environment, activate it, and install the packages:
```Shell
conda create -n tinyllava_video python=3.10 -y
conda activate tinyllava_video
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
```

3. Install additional packages:
```Shell
pip install flash-attn --no-build-isolation
```

##### Upgrade to the latest code base

```Shell
git pull
pip install -e .
```
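
Optionally, you can verify the environment before training; this check is illustrative and not part of the repository:

```python
# Optional environment check (illustrative): confirm PyTorch sees a GPU and flash-attn imports.
import torch
import flash_attn

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("flash-attn:", flash_attn.__version__)
```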

## Usage

### Trained Model

The model we provide after training: [TinyLLaVA-Video-R1](https://huggingface.co/Zhang199/TinyLLaVA-Video-R1)

### 1. Data Preparation

We select multiple-choice questions from the NextQA subset of [LLaVA-Video-178K](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K) as training data. To keep training time manageable with limited computational resources, we only choose the subset of data with a duration of 0 to 30 seconds, which contains 5,496 samples. The training data can be downloaded from [here](https://huggingface.co/datasets/Zhang199/TinyLLaVA-Video-R1-training-data).

#### Organize Data

Organize the video files and annotation files as follows in ``path/to/your/dataset``:

```Shell
dataset
├── NextQA
│   └── NExTVideo
├── nextqa_0-30s.jsonl
└── nextqa-coldstart-16.json
```
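
As a quick sanity check after downloading, you can peek at the first record of the annotation file (path taken from the layout above; only the keys are printed, since the exact field names depend on the release):

```python
# Illustrative sanity check, not part of the repo: inspect one training annotation.
import json

with open("path/to/your/dataset/nextqa_0-30s.jsonl", "r", encoding="utf-8") as f:
    first_record = json.loads(f.readline())

print(sorted(first_record.keys()))
```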

### 2. Train

#### 1. Cold Start

**Option 1**: You can directly download [TinyLLaVA-Video-ColdStart](https://huggingface.co/Zhang199/TinyLLaVA-Video-Coldstart_NextQA_16).

**Option 2**: You can train the model yourself:

Replace the data and model paths with your own in `scripts/train/train_qwen2_coldstart.sh`, then run:

```bash
bash scripts/train/train_qwen2_coldstart.sh
```

#### 2. GRPO Training

Replace the data paths and `output_dir` with your own in `scripts/train/train_qwen2_reason_nextqa.sh`, then run:

```bash
bash scripts/train/train_qwen2_reason_nextqa.sh
```

### 3. Evaluation

We currently provide evaluation on four benchmarks: [Video-MME](https://video-mme.github.io/home_page.html#leaderboard), [MVBench](https://huggingface.co/datasets/OpenGVLab/MVBench), [MLVU](https://github.com/JUNJIE99/MLVU), and [MMVU](https://github.com/yale-nlp/MMVU).

#### Video-MME

1. Download [Video-MME](https://huggingface.co/datasets/lmms-lab/Video-MME) and put it under ``path/to/your/dataset/eval/Video-MME``.
2. Please change ``MODEL_PATH``, ``MODEL_NAME``, ``EVAL_DIR``, ``conv-mode``, and ``duration`` in ``scripts/eval/videomme.sh``. There are three types of ``duration`` available for testing: ``short``, ``medium``, and ``long``.
3. Please use the following command for single-GPU inference.
```bash
CUDA_VISIBLE_DEVICES=0 bash scripts/eval/videomme.sh
```

#### MVBench

1. Download [MVBench](https://huggingface.co/datasets/OpenGVLab/MVBench) and put it under ``path/to/your/dataset/eval/MVBench``.
2. Please change ``MODEL_PATH``, ``MODEL_NAME``, ``EVAL_DIR``, and ``conv-mode`` in ``scripts/eval/mvbench.sh``.
3. Please use the following command for single-GPU inference.
```bash
CUDA_VISIBLE_DEVICES=0 bash scripts/eval/mvbench.sh
```

#### MLVU

1. Download [MLVU](https://huggingface.co/datasets/MLVU/MVLU) and put it under ``path/to/your/dataset/eval/MLVU``.
2. Please change ``MODEL_PATH``, ``MODEL_NAME``, ``EVAL_DIR``, and ``conv-mode`` in ``scripts/eval/mlvu.sh``.
3. Please use the following command for single-GPU inference.
```bash
CUDA_VISIBLE_DEVICES=0 bash scripts/eval/mlvu.sh
```

#### MMVU

1. Download [MMVU](https://huggingface.co/datasets/yale-nlp/MMVU) and put it under ``path/to/your/dataset/eval/MMVU``.
2. Please change ``MODEL_PATH``, ``MODEL_NAME``, ``EVAL_DIR``, and ``conv-mode`` in ``scripts/eval/mmvu.sh``.
3. Please use the following command for single-GPU inference.
```bash
CUDA_VISIBLE_DEVICES=0 bash scripts/eval/mmvu.sh
```

### Quick Inference Scripts

1. Please change ``model_path``, ``prompt``, and ``video_file`` in ``eval.py``.
2. Please use the following command for single-GPU inference.
```bash
CUDA_VISIBLE_DEVICES=0 python eval.py
```

## Results

The performance of **TinyLLaVA-Video-R1** on multiple benchmarks is shown below. "Option" indicates that the model only needs to answer with the selected choice, while "Reason" means the model must output both the answer and the reasoning process according to the format requirements. Here, MMVU is categorized as a video reasoning benchmark, while the remaining benchmarks are designed for general-purpose video evaluation. The best results are indicated in boldface.

<div align="center">
<img src="images/result.jpg" alt="framework" width="75%" height="auto">
</div>

The performance of **TinyLLaVA-Video-R1** is significantly higher than that of TinyLLaVA-Video-ColdStart, especially on benchmarks that test reasoning ability, such as MMVU. Moreover, it outperforms TinyLLaVA-Video-SFT across all benchmarks, highlighting the effectiveness of the reinforcement learning approach employed.

## <img id="painting_icon" width="3%" src="https://cdn-icons-png.flaticon.com/256/3176/3176298.png"> Aha Moment

**TinyLLaVA-Video-R1** exhibits "aha moments" where it revisits and refines its initial reasoning. As shown in the image below, the model self-corrects by evaluating different options and improving its responses, which enhances accuracy and interpretability. This reflective behavior distinguishes it from traditional models, offering greater transparency in the reasoning process.

<div align="center">
<img src="images/aha_moment.jpg" alt="framework" width="90%" height="auto">
</div>

## Citation

If you find our work interesting and helpful, please consider giving our repo a star. Additionally, if you would like to cite our work, please use the following format:

```bibtex
@misc{zhang2025tinyllavavideor1smallerlmmsvideo,
      title={TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning},
      author={Xingjian Zhang and Siwei Wen and Wenjun Wu and Lei Huang},
      year={2025},
      eprint={2504.09641},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.09641},
}
```

## Contact

If you have any questions or suggestions, please feel free to contact us at ``[email protected]``.

## Community efforts

* This repository is based on the [TinyLLaVA-Video](https://github.com/ZhangXJ199/TinyLLaVA-Video) project.
* The implementation of the GRPO algorithm refers to the [open-r1-multimodal](https://github.com/EvolvingLMMs-Lab/open-r1-multimodal) project. Great work!