Ricky06662 and nielsr (HF Staff) committed
Commit d27f35b · verified · 1 Parent(s): 9e85977

Improve model card for VisionReasoner-7B (Seg-Zero framework) (#2)


- Improve model card for VisionReasoner-7B (Seg-Zero framework) (99e8a6b6cd819c0727f49ba0edb9d79dd70d5c24)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +54 -15
README.md CHANGED
@@ -1,33 +1,50 @@
  ---
- license: apache-2.0
  datasets:
- - COCO
- - ReasonSeg
- - CountBench
  language:
- - en
- metrics:
- - accuracy
- base_model:
- - Qwen2.5-VL
- pipeline_tag: image-text-to-text
  library_name: transformers
  ---

- # VisionReasoner-7B

- [Paper](https://huggingface.co/papers/2505.12081)

- Code: [https://github.com/dvlab-research/VisionReasoner](https://github.com/dvlab-research/VisionReasoner)

- Project page: [https://github.com/dvlab-research/VisionReasoner](https://github.com/dvlab-research/VisionReasoner)

  ## Description

- This is a VisionReasoner-7B model. It introduces a decoupled architecture consisting of a reasoning model and a segmentation model. The reasoning model interprets user intentions, generates explicit reasoning chains, and produces positional prompts, which are subsequently used by the segmentation model to generate pixel-level masks.

  ## Usage

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch
@@ -35,4 +52,26 @@ import torch
  # load model
  model = AutoModelForCausalLM.from_pretrained("Ricky06662/VisionReasoner-7B")
  tokenizer = AutoTokenizer.from_pretrained("Ricky06662/VisionReasoner-7B")
  ```
 
  ---
+ base_model:
+ - Qwen2.5-VL
  datasets:
+ - COCO
+ - ReasonSeg
+ - CountBench
+ - Ricky06662/refCOCOg_9k_840
+ - Ricky06662/VisionReasoner_multi_object_7k_840
  language:
+ - en
  library_name: transformers
+ license: apache-2.0
+ metrics:
+ - accuracy
+ pipeline_tag: image-segmentation
  ---

+ # VisionReasoner-7B from the Seg-Zero Framework

+ This repository contains the **VisionReasoner-7B** model, developed as part of the novel **Seg-Zero** framework, presented in the paper [Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement](https://huggingface.co/papers/2503.06520). This model is also associated with the paper [VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning](https://huggingface.co/papers/2505.12081).

+ Code: [https://github.com/dvlab-research/Seg-Zero](https://github.com/dvlab-research/Seg-Zero)
+ Project page: [https://github.com/dvlab-research/Seg-Zero](https://github.com/dvlab-research/Seg-Zero)

+ <div align="center">
+ <img width="98%" src="https://raw.githubusercontent.com/dvlab-research/Seg-Zero/main/assets/overview.png"/>
+ </div>

  ## Description

+ **Seg-Zero** is a novel framework that demonstrates remarkable generalizability and derives explicit chain-of-thought reasoning through cognitive reinforcement for reasoning segmentation. This **VisionReasoner-7B** model employs a decoupled architecture consisting of a reasoning model and a segmentation model. The reasoning model interprets user intentions, generates explicit reasoning chains, and produces positional prompts, which are subsequently used by the segmentation model to generate precise pixel-level masks.
+
+ <div align="center">
+ <img width="98%" src="https://raw.githubusercontent.com/dvlab-research/Seg-Zero/main/assets/pipeline.png"/>
+ </div>
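+
+ Conceptually, the decoupled design is a two-stage loop: the reasoning model turns a text query into positional prompts (boxes and points), and a promptable segmenter turns those prompts into masks. The sketch below is illustrative only; `reason`, `segment`, and `PositionalPrompt` are hypothetical names, not the project's actual API (see the GitHub repository for the real interfaces):
+
+ ```python
+ # Hypothetical two-stage pipeline: query -> positional prompts -> masks.
+ from dataclasses import dataclass, field
+ from typing import List
+
+ @dataclass
+ class PositionalPrompt:
+     box: List[int]                                          # [x1, y1, x2, y2] from the reasoning model
+     points: List[List[int]] = field(default_factory=list)   # positive click points inside the target
+
+ def reason(query: str, image) -> List[PositionalPrompt]:
+     """Stand-in for the reasoning model: it interprets the query, emits a
+     reasoning chain, and returns one positional prompt per target object."""
+     return [PositionalPrompt(box=[10, 20, 200, 240], points=[[105, 130]])]
+
+ def segment(image, prompt: PositionalPrompt):
+     """Stand-in for the promptable segmentation model (a SAM-style model)
+     that converts a positional prompt into a pixel-level mask."""
+     return None  # a real implementation returns a binary mask array
+
+ def pipeline(query: str, image) -> list:
+     prompts = reason(query, image)               # stage 1: reasoning
+     return [segment(image, p) for p in prompts]  # stage 2: segmentation
+ ```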
+
+ Trained exclusively via reinforcement learning with GRPO and without explicit reasoning data, Seg-Zero achieves robust zero-shot generalization and exhibits emergent test-time reasoning capabilities. Experiments show that Seg-Zero-7B achieves a zero-shot performance of 57.5 on the ReasonSeg benchmark, surpassing the prior LISA-7B by 18%. This significant improvement highlights Seg-Zero's ability to generalize across domains while presenting an explicit reasoning process.
+
+ <div align="center">
+ <img width="98%" src="https://raw.githubusercontent.com/dvlab-research/Seg-Zero/main/assets/examples.png"/>
+ </div>
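+
+ As a rough picture of what GRPO optimizes: for each query, a group of responses is sampled, each response receives a scalar reward (Seg-Zero uses rule-based rewards, such as format and localization accuracy), and rewards are normalized within the group to form advantages. The snippet below is a generic illustration of that normalization step, not the repository's training code:
+
+ ```python
+ # Group-relative advantage normalization as used in GRPO (illustrative).
+ import torch
+
+ def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
+     """rewards: (num_prompts, group_size) scores for the sampled responses of
+     each prompt. Returns A_i = (r_i - mean(r)) / (std(r) + eps), computed
+     within each group."""
+     mean = rewards.mean(dim=-1, keepdim=True)
+     std = rewards.std(dim=-1, keepdim=True)
+     return (rewards - mean) / (std + eps)
+
+ # Example: 2 prompts, 4 sampled responses each.
+ rewards = torch.tensor([[0.0, 0.3, 0.9, 0.3],
+                         [1.0, 1.0, 0.2, 0.6]])
+ print(grpo_advantages(rewards))  # higher-reward responses get positive advantage
+ ```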

  ## Usage

+ You can load and use this model with the `transformers` library:
+
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch

  # load model
  model = AutoModelForCausalLM.from_pretrained("Ricky06662/VisionReasoner-7B")
  tokenizer = AutoTokenizer.from_pretrained("Ricky06662/VisionReasoner-7B")
+ ```
+
+ For full inference examples, including image processing and input formatting, please refer to the project's GitHub repository.
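+
+ Because the base model is Qwen2.5-VL, a multimodal call will typically go through the Qwen2.5-VL classes rather than `AutoModelForCausalLM`. Below is a minimal sketch, assuming a recent `transformers` release, a Qwen2.5-VL-compatible checkpoint, and a local image at a placeholder path; the exact prompt format VisionReasoner expects is defined in the repository:
+
+ ```python
+ # Hedged multimodal inference sketch (not the official script).
+ import torch
+ from PIL import Image
+ from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
+
+ model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
+     "Ricky06662/VisionReasoner-7B", torch_dtype=torch.bfloat16, device_map="auto"
+ )
+ processor = AutoProcessor.from_pretrained("Ricky06662/VisionReasoner-7B")
+
+ image = Image.open("example.jpg")  # placeholder path
+ messages = [{"role": "user", "content": [
+     {"type": "image"},
+     {"type": "text", "text": "Find the leftmost cup and output its bounding box."},
+ ]}]
+ text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
+
+ with torch.no_grad():
+     out = model.generate(**inputs, max_new_tokens=512)
+ print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:],
+                              skip_special_tokens=True)[0])
+ ```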
+
+ ## Citation
+
+ If you find our work helpful or inspiring, please feel free to cite our papers:
+
+ ```bibtex
+ @article{liu2025segzero,
+   title   = {Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement},
+   author  = {Liu, Yuqi and Peng, Bohao and Zhong, Zhisheng and Yue, Zihao and Lu, Fanbin and Yu, Bei and Jia, Jiaya},
+   journal = {arXiv preprint arXiv:2503.06520},
+   year    = {2025}
+ }
+
+ @article{liu2025visionreasoner,
+   title   = {VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning},
+   author  = {Liu, Yuqi and Qu, Tianyuan and Zhong, Zhisheng and Peng, Bohao and Liu, Shu and Yu, Bei and Jia, Jiaya},
+   journal = {arXiv preprint arXiv:2505.12081},
+   year    = {2025}
+ }
  ```