jcwang0602 and nielsr (HF Staff) committed
Commit 54f5840 · verified · 1 Parent(s): 5694a13

Improve model card for MLLMSeg: Add metadata, abstract, and usage example (#1)


- Improve model card for MLLMSeg: Add metadata, abstract, and usage example (5bd60312454c0e21f3264af59a5e4468f60991b3)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +176 -3
README.md CHANGED
---
license: mit
pipeline_tag: image-segmentation
library_name: transformers
---

# MLLMSeg: Unlocking the Potential of MLLMs in Referring Expression Segmentation via a Light-weight Mask Decoder

This repository contains the `MLLMSeg_InternVL2_5_8B_RES` model presented in the paper [Unlocking the Potential of MLLMs in Referring Expression Segmentation via a Light-weight Mask Decoder](https://huggingface.co/papers/2508.04107).

Referring Expression Segmentation (RES) aims to segment the image region specified by a referring expression. While Multimodal Large Language Models (MLLMs) excel at semantic understanding, their token-generation paradigm struggles with pixel-level dense prediction. MLLMSeg addresses this by fully exploiting the visual detail features already encoded by the MLLM's vision encoder, without introducing an extra visual encoder. It proposes a detail-enhanced and semantic-consistent feature fusion module (DSFF) and a light-weight mask decoder (only 34M network parameters) that leverages both detailed spatial features and semantic features for precise mask prediction. Extensive experiments demonstrate that MLLMSeg generally surpasses both SAM-based and SAM-free competitors, striking a better balance between performance and cost.

Code: https://github.com/jcwang0602/MLLMSeg
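
For intuition about the design described above, here is a deliberately simplified sketch of how detail features from the vision encoder and a semantic embedding from the LLM could be fused and decoded into a mask. This is **not** the released MLLMSeg implementation (see the linked code for that); every module name, dimension, and layer choice below is an illustrative assumption.

```python
# Illustrative sketch only -- NOT the released MLLMSeg code.
# It shows the general data flow: fuse per-patch detail features with a
# semantic embedding, then decode a mask with a small convolutional head.
import torch
import torch.nn as nn

class DetailSemanticFusion(nn.Module):
    """Toy stand-in for a detail-enhanced, semantic-consistent fusion step."""
    def __init__(self, vis_dim=1024, llm_dim=4096, fuse_dim=256):
        super().__init__()
        self.proj_vis = nn.Linear(vis_dim, fuse_dim)   # detail tokens from the vision encoder
        self.proj_sem = nn.Linear(llm_dim, fuse_dim)   # semantic embedding from the LLM side
        self.mix = nn.Sequential(nn.Linear(2 * fuse_dim, fuse_dim), nn.GELU())

    def forward(self, vis_tokens, sem_token):
        # vis_tokens: (B, N, vis_dim) with N = grid*grid patches; sem_token: (B, llm_dim)
        v = self.proj_vis(vis_tokens)
        s = self.proj_sem(sem_token).unsqueeze(1).expand_as(v)
        return self.mix(torch.cat([v, s], dim=-1))     # (B, N, fuse_dim)

class LightMaskDecoder(nn.Module):
    """Toy light-weight decoder: a couple of convs plus upsampling to one mask channel."""
    def __init__(self, fuse_dim=256, grid=32):
        super().__init__()
        self.grid = grid
        self.head = nn.Sequential(
            nn.Conv2d(fuse_dim, 128, 3, padding=1), nn.GELU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 1, 1),
        )

    def forward(self, fused_tokens):
        # fused_tokens: (B, N, fuse_dim) -> (B, fuse_dim, grid, grid) -> (B, 1, 4*grid, 4*grid)
        b, n, c = fused_tokens.shape
        feat = fused_tokens.transpose(1, 2).reshape(b, c, self.grid, self.grid)
        return self.head(feat)

# Tiny smoke test with made-up shapes
fuse = DetailSemanticFusion()
decoder = LightMaskDecoder()
mask_logits = decoder(fuse(torch.randn(1, 32 * 32, 1024), torch.randn(1, 4096)))
print(mask_logits.shape)  # torch.Size([1, 1, 128, 128])
```

The point of the sketch is only the overall idea: project both feature streams to a shared width, fuse them per spatial token, and decode with a few lightweight layers instead of a full extra segmentation backbone.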

<p align="center">
  <img src="https://github.com/jcwang0602/MLLMSeg/raw/main/assets/method.png" width="800">
</p>

## Usage

You can use this model with the `transformers` library. Below is an example demonstrating how to load and use the `MLLMSeg_InternVL2_5_8B_RES` model for inference.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
import torchvision.transforms as T
from torchvision.transforms.functional import InterpolationMode
import requests
from io import BytesIO

# Image preprocessing utilities
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def build_transform(input_size):
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform

def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio

def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    # Enumerate candidate tilings and pick the one closest to the image's aspect ratio
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1)
        if i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # Resize, then split the image into image_size x image_size tiles
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images

def load_image(image_file, input_size=448, max_num=12):
    if image_file.startswith(('http://', 'https://')):
        response = requests.get(image_file)
        image = Image.open(BytesIO(response.content)).convert('RGB')
    else:
        image = Image.open(image_file).convert('RGB')

    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values

# Load model and tokenizer
model_path = "jcwang0602/MLLMSeg_InternVL2_5_8B_RES"
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=False)

# Load an example image (replace with your image path or URL)
image_path = "https://github.com/jcwang0602/MLLMSeg/raw/main/assets/res_0.png"  # Example image from the repo
pixel_values = load_image(image_path, max_num=6).to(torch.bfloat16).cuda()

# Define the referring expression
question = "Please segment the person in the screenshot."

# Set generation configuration
generation_config = dict(max_new_tokens=1024, do_sample=False, temperature=0.0)

# Generate response and segmentation mask
# The output_segmentation_mask=True parameter is crucial for getting the mask directly.
response, history, pred_mask = model.chat(
    tokenizer, pixel_values, question, generation_config, history=None, return_history=True, output_segmentation_mask=True
)

print(f'User: {question}\nAssistant: {response}')
# `pred_mask` contains the predicted segmentation mask as a torch.Tensor.
# You can save or visualize it, for example:
# from torchvision.utils import save_image
# save_image(pred_mask.float(), "segmentation_mask.png")
```
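
To inspect the prediction, the snippet below sketches one way to binarize and overlay the returned mask. The exact shape and value range of `pred_mask` (logits vs. probabilities, extra batch or channel dimensions) depend on the model's custom `chat` implementation, so treat the squeeze, threshold, and resizing here as assumptions to adapt.

```python
import numpy as np
import requests
from io import BytesIO
from PIL import Image

# Assumption: `pred_mask` is a single-channel float tensor (possibly with extra
# batch/channel dims). If it holds logits rather than probabilities, threshold
# at 0 instead of 0.5.
mask = pred_mask.detach().squeeze().float().cpu()
mask_np = (mask > 0.5).numpy().astype(np.uint8) * 255
Image.fromarray(mask_np).save("segmentation_mask.png")

# Quick visual check: paint the segmented region red on a resized copy of the
# input image (re-downloaded here; adjust if you used a local file).
orig = Image.open(BytesIO(requests.get(image_path).content)).convert("RGB")
orig = orig.resize((mask_np.shape[1], mask_np.shape[0]))  # PIL expects (width, height)
overlay = np.array(orig)
overlay[mask_np > 0] = [255, 0, 0]
Image.fromarray(overlay).save("segmentation_overlay.png")
```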

## Performance Metrics

### Referring Expression Segmentation
<img src="https://github.com/jcwang0602/MLLMSeg/raw/main/assets/tab_res.png" width="800">

### Referring Expression Comprehension
<img src="https://github.com/jcwang0602/MLLMSeg/raw/main/assets/tab_rec.png" width="800">

### Generalized Referring Expression Segmentation
<img src="https://github.com/jcwang0602/MLLMSeg/raw/main/assets/tab_gres.png" width="800">

## Visualization

### Referring Expression Segmentation
<img src="https://github.com/jcwang0602/MLLMSeg/raw/main/assets/res.png" width="800">

### Referring Expression Comprehension
<img src="https://github.com/jcwang0602/MLLMSeg/raw/main/assets/rec.png" width="800">

### Generalized Referring Expression Segmentation
<img src="https://github.com/jcwang0602/MLLMSeg/raw/main/assets/gres.png" width="800">

## Citation

If our work is useful for your research, please consider citing:

```bibtex
@misc{wang2025unlockingpotentialmllmsreferring,
  title={Unlocking the Potential of MLLMs in Referring Expression Segmentation via a Light-weight Mask Decoder},
  author={Jingchao Wang and Zhijian Wu and Dingjiang Huang and Yefeng Zheng and Hong Wang},
  year={2025},
  eprint={2508.04107},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2508.04107},
}
```