---
license: apache-2.0
pipeline_tag: image-text-to-text
---

**<center><span style="font-size:2em;">TinyLLaVA-Video-R1</span></center>**

[GitHub](https://github.com/ZhangXJ199/TinyLLaVA-Video-R1)

This model is obtained by cold-starting [TinyLLaVA-Video](https://huggingface.co/Zhang199/TinyLLaVA-Video-Qwen2.5-3B-Group-16-512) with 16 manually annotated samples from the NextQA dataset. It serves as the base model for [TinyLLaVA-Video-R1](https://huggingface.co/Zhang199/TinyLLaVA-Video-R1).

The 16 manually annotated samples used for cold-starting have been released [here](https://huggingface.co/datasets/Zhang199/TinyLLaVA-Video-R1-training-data).
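
For convenience, below is a minimal loading sketch with 🤗 Transformers. It is only a starting point under two assumptions: the repository id is a placeholder for this model's Hub id, and the checkpoint ships the TinyLLaVA-Video custom modeling code (hence `trust_remote_code=True`). The actual video preprocessing, prompt formatting, and generation pipeline is provided by the code in the GitHub repository linked above.

```python
# Minimal loading sketch (assumptions: the repository id below is a placeholder
# for this model card's Hub id, and the checkpoint ships TinyLLaVA-Video custom
# modeling code, so trust_remote_code=True is required).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Zhang199/TinyLLaVA-Video-coldstart"  # placeholder: replace with this model's Hub id

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # loads the custom TinyLLaVA-Video model class from the Hub repo
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Video frame sampling, prompt construction, and generation follow the custom
# pipeline in the GitHub repository linked above.
```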