Commit eea8fa9 (verified) · Senqiao · Parent(s): e5c5b63

Update README.md

Files changed (1): README.md (+73 -1)
README.md:

datasets:
- Senqiao/VisionThink-General-Val
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
---
<p align="center" width="100%">
<img src="https://raw.githubusercontent.com/dvlab-research/VisionThink/main/files/VisionThink.jpg" alt="VisionThink" style="width: 100%; min-width: 300px; display: block; margin: auto;">
</p>

# VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning

[![Paper](https://img.shields.io/badge/Paper-Arxiv%20Link-light)](https://arxiv.org/abs/2507.13348)
[![HF](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Discussion-orange)](https://huggingface.co/papers/2507.13348)
[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-yellow.svg)](https://github.com/dvlab-research/VisionThink/blob/main/LICENSE)
<a href='https://huggingface.co/collections/Senqiao/visionthink-6878d839fae02a079c9c7bfe'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Data%20Model-Collection-red'></a>

## Senqiao/VisionThink-General

This model is trained with reinforcement learning on [`Senqiao/VisionThink-General-Train`](https://huggingface.co/datasets/Senqiao/VisionThink-General-Train) and demonstrates enhanced performance on general VQA tasks.

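For quick testing, a minimal inference sketch is shown below. It assumes the checkpoint loads through the standard Qwen2.5-VL path in recent versions of `transformers` (with the `qwen-vl-utils` helper for packing images); the image path and generation settings are placeholders, so defer to the official VisionThink repository if its usage differs.

```python
# Minimal inference sketch (assumption: this checkpoint follows the standard
# Qwen2.5-VL loading path; the image path below is a placeholder).
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

model_id = "Senqiao/VisionThink-General"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "path/to/your_image.jpg"},
        {"type": "text", "text": "Describe this image."},
    ],
}]

# Build the chat prompt, pack the image, and run generation.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=256)
# Keep only the newly generated tokens before decoding.
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```
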
**VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning [[Paper](https://arxiv.org/abs/2507.13348)]** <br />
[Senqiao Yang](https://scholar.google.com/citations?user=NcJc-RwAAAAJ),
[Junyi Li](https://scholar.google.com/citations?hl=zh-CN&user=zQ0P3JAAAAAJ),
[Xin Lai](https://scholar.google.com/citations?user=tqNDPA4AAAAJ),
[Bei Yu](https://scholar.google.com/citations?user=tGneTm4AAAAJ),
[Hengshuang Zhao](https://scholar.google.com/citations?user=4uE10I0AAAAJ),
[Jiaya Jia](https://scholar.google.com/citations?user=XPAkzTEAAAAJ)<br />

## Highlights
<p align="center" width="80%">
<img src="https://raw.githubusercontent.com/dvlab-research/VisionThink/main/files/Framework.jpg" alt="VisionThink framework" style="width: 80%; min-width: 300px; display: block; margin: auto;">
</p>

1. Our VisionThink leverages reinforcement learning to **autonomously** learn whether to reduce visual tokens. Compared to traditional efficient-VLM approaches, our method achieves significant improvements on **fine-grained** benchmarks, such as those involving OCR-related tasks.

2. VisionThink improves performance on **General VQA** tasks while reducing visual tokens by **50%**, achieving **102%** of the original model's performance across nine benchmarks.

3. VisionThink achieves strong performance and efficiency by simply resizing input images to reduce visual tokens (see the sketch after this list). We hope this inspires further research into **Efficient Reasoning Vision Language Models**.

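Point 3 is easy to reason about numerically: under the usual Qwen2.5-VL tokenization, each roughly 28x28-pixel region (14-pixel patches with a 2x2 merger) becomes one visual token, so the token count scales with pixel area. The sketch below is a back-of-envelope estimate under that assumption; the processor's own resizing and rounding will shift the exact numbers, and the image path is a placeholder.

```python
# Back-of-envelope estimate of visual tokens vs. image size.
# Assumption: one visual token per ~28x28-pixel region (Qwen2.5-VL's 14-pixel
# patches merged 2x2); the real processor also rounds sizes, so treat these
# numbers as rough estimates only.
from PIL import Image

def approx_visual_tokens(width: int, height: int, region: int = 28) -> int:
    """Rough visual-token count for an image of the given pixel size."""
    return max(1, round(width / region)) * max(1, round(height / region))

img = Image.open("path/to/your_image.jpg")  # placeholder path
w, h = img.size
print("full resolution:", approx_visual_tokens(w, h), "tokens (approx.)")

# Scaling each side by 1/sqrt(2) halves the pixel area, and therefore roughly
# halves the visual-token budget -- the ~50% reduction discussed above.
scale = 0.5 ** 0.5
small = img.resize((max(1, int(w * scale)), max(1, int(h * scale))))
print("half-area resize:", approx_visual_tokens(*small.size), "tokens (approx.)")
```
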
## Video
<p align="center" width="85%">
<a href="https://www.youtube.com/watch?v=DGjbFbA5mBw" target="_blank">
<img src="https://raw.githubusercontent.com/dvlab-research/VisionThink/main/files/Video.png" alt="VisionThink video" style="width: 70%; min-width: 300px; display: block; margin: auto;">
</a>
</p>

## Citation

If you find this project useful in your research, please consider citing:

> This work is highly motivated by our previous effort on efficient VLMs, [**VisionZip**](https://github.com/dvlab-research/VisionZip), which explores token compression for faster inference.

```
@article{yang2025visionthink,
  title={VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning},
  author={Yang, Senqiao and Li, Junyi and Lai, Xin and Yu, Bei and Zhao, Hengshuang and Jia, Jiaya},
  journal={arXiv preprint arXiv:2507.13348},
  year={2025}
}
@article{yang2024visionzip,
  title={VisionZip: Longer is Better but Not Necessary in Vision Language Models},
  author={Yang, Senqiao and Chen, Yukang and Tian, Zhuotao and Wang, Chengyao and Li, Jingyao and Yu, Bei and Jia, Jiaya},
  journal={arXiv preprint arXiv:2412.04467},
  year={2024}
}
```