---
license: mit
pipeline_tag: image-segmentation
library_name: transformers
---
# MLLMSeg: Unlocking the Potential of MLLMs in Referring Expression Segmentation via a Light-weight Mask Decoder
This repository contains the `MLLMSeg_InternVL2_5_8B_RES` model presented in the paper [Unlocking the Potential of MLLMs in Referring Expression Segmentation via a Light-weight Mask Decoder](https://huggingface.co/papers/2508.04107).
Referring Expression Segmentation (RES) aims to segment the image regions specified by referring expressions. While Multimodal Large Language Models (MLLMs) excel at semantic understanding, their token-generation paradigm struggles with pixel-level dense prediction. MLLMSeg addresses this by fully exploiting the visual detail features already encoded in the MLLM vision encoder, without introducing an extra visual encoder. It proposes a detail-enhanced and semantic-consistent feature fusion (DSFF) module and a light-weight mask decoder (only 34M parameters) that combines detailed spatial features with semantic features for precise mask prediction. Extensive experiments show that MLLMSeg generally surpasses both SAM-based and SAM-free competitors, striking a better balance between performance and cost.
Code: https://github.com/jcwang0602/MLLMSeg
<p align="center">
<img src="https://github.com/jcwang0602/MLLMSeg/raw/main/assets/method.png" width="800">
</p>
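For intuition, the following is a minimal PyTorch sketch of the core idea: detail-rich features from the MLLM vision encoder are fused with semantic features from the language model, and a small convolutional decoder predicts the mask. The class names, dimensions, and fusion details below are illustrative assumptions, not the actual implementation (see the GitHub repository for that).

```python
# Illustrative sketch only -- module names, shapes, and fusion details are assumptions.
import torch
import torch.nn as nn

class ToyDetailSemanticFusion(nn.Module):
    # Stand-in for the DSFF idea: combine spatial detail features from the vision
    # encoder with a semantic embedding produced by the LLM.
    def __init__(self, detail_dim=1024, semantic_dim=4096, hidden_dim=256):
        super().__init__()
        self.detail_proj = nn.Conv2d(detail_dim, hidden_dim, kernel_size=1)
        self.semantic_proj = nn.Linear(semantic_dim, hidden_dim)
        self.fuse = nn.Conv2d(hidden_dim * 2, hidden_dim, kernel_size=3, padding=1)

    def forward(self, detail_feat, semantic_feat):
        # detail_feat: (B, C_v, H, W) from the vision encoder
        # semantic_feat: (B, C_t), e.g. the hidden state of a segmentation token
        d = self.detail_proj(detail_feat)
        s = self.semantic_proj(semantic_feat)[:, :, None, None].expand_as(d)
        return self.fuse(torch.cat([d, s], dim=1))

class ToyMaskDecoder(nn.Module):
    # Stand-in for the light-weight decoder: upsample fused features to mask logits.
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.head = nn.Sequential(
            nn.ConvTranspose2d(hidden_dim, hidden_dim // 2, 2, stride=2),
            nn.GELU(),
            nn.ConvTranspose2d(hidden_dim // 2, hidden_dim // 4, 2, stride=2),
            nn.GELU(),
            nn.Conv2d(hidden_dim // 4, 1, kernel_size=1),
        )

    def forward(self, fused):
        return self.head(fused)  # (B, 1, 4H, 4W) mask logits

# Example usage:
# fusion, decoder = ToyDetailSemanticFusion(), ToyMaskDecoder()
# logits = decoder(fusion(torch.randn(1, 1024, 32, 32), torch.randn(1, 4096)))  # (1, 1, 128, 128)
```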
## Usage
You can use this model with the `transformers` library. Below is an example demonstrating how to load and use the `MLLMSeg_InternVL2_5_8B_RES` model for inference.
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
import torchvision.transforms as T
from torchvision.transforms.functional import InterpolationMode
import requests
from io import BytesIO
# Define image preprocessing utility functions
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
# Standard ImageNet-style preprocessing at the requested input size.
def build_transform(input_size):
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform

# Pick the tiling grid whose aspect ratio best matches the input image.
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio

# Split the image into up to `max_num` tiles of size `image_size`, optionally adding a thumbnail.
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
        i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images

# Load an image from a local path or URL and return the stacked tile tensors.
def load_image(image_file, input_size=448, max_num=12):
    if image_file.startswith(('http://', 'https://')):
        response = requests.get(image_file)
        image = Image.open(BytesIO(response.content)).convert('RGB')
    else:
        image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values
# Load model and tokenizer
model_path = "jcwang0602/MLLMSeg_InternVL2_5_8B_RES"
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=False)
# Load an example image (replace with your image path or URL)
image_path = "https://github.com/jcwang0602/MLLMSeg/raw/main/assets/res_0.png" # Example image from the repo
pixel_values = load_image(image_path, max_num=6).to(torch.bfloat16).cuda()
# Define the referring expression
question = "Please segment the person in the screenshot."
# Set generation configuration
generation_config = dict(max_new_tokens=1024, do_sample=False, temperature=0.0)
# Generate response and segmentation mask
# The output_segmentation_mask=True parameter is crucial for getting the mask directly.
response, history, pred_mask = model.chat(
    tokenizer, pixel_values, question, generation_config,
    history=None, return_history=True, output_segmentation_mask=True
)
print(f'User: {question}\nAssistant: {response}')
# `pred_mask` contains the predicted segmentation mask as a torch.Tensor.
# You can save or visualize it, for example:
# from torchvision.utils import save_image
# save_image(pred_mask.float(), "segmentation_mask.png")
```
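To sanity-check the prediction visually, the sketch below overlays `pred_mask` on the input image. It assumes `pred_mask` squeezes to a single 2D map whose values can be thresholded at 0.5; adjust the threshold, resizing, and dtype handling to whatever `model.chat` actually returns in your setup.

```python
# Hedged visualization helper: assumes `pred_mask` squeezes to a 2D map in [0, 1].
import numpy as np
from io import BytesIO
from PIL import Image
import requests

def overlay_mask(image_file, pred_mask, color=(255, 0, 0), alpha=0.5):
    if image_file.startswith(('http://', 'https://')):
        image = Image.open(BytesIO(requests.get(image_file).content)).convert('RGB')
    else:
        image = Image.open(image_file).convert('RGB')
    mask = (pred_mask.float().squeeze().cpu() > 0.5).numpy().astype(np.uint8)
    mask_img = Image.fromarray(mask * 255).resize(image.size, Image.NEAREST)
    overlay = Image.new('RGB', image.size, color)
    # Blend the highlight color into the image only where the mask is positive.
    return Image.composite(Image.blend(image, overlay, alpha), image, mask_img)

# blended = overlay_mask(image_path, pred_mask)
# blended.save("segmentation_overlay.png")
```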
## Performance Metrics
### Referring Expression Segmentation
<img src="https://github.com/jcwang0602/MLLMSeg/raw/main/assets/tab_res.png" width="800">
### Referring Expression Comprehension
<img src="https://github.com/jcwang0602/MLLMSeg/raw/main/assets/tab_rec.png" width="800">
### Generalized Referring Expression Segmentation
<img src="https://github.com/jcwang0602/MLLMSeg/raw/main/assets/tab_gres.png" width="800">
## Visualization
### Referring Expression Segmentation
<img src="https://github.com/jcwang0602/MLLMSeg/raw/main/assets/res.png" width="800">
### Referring Expression Comprehension
<img src="https://github.com/jcwang0602/MLLMSeg/raw/main/assets/rec.png" width="800">
### Generalized Referring Expression Segmentation
<img src="https://github.com/jcwang0602/MLLMSeg/raw/main/assets/gres.png" width="800">
## Citation
If our work is useful for your research, please consider citing:
```bibtex
@misc{wang2025unlockingpotentialmllmsreferring,
title={Unlocking the Potential of MLLMs in Referring Expression Segmentation via a Light-weight Mask Decoder},
author={Jingchao Wang and Zhijian Wu and Dingjiang Huang and Yefeng Zheng and Hong Wang},
year={2025},
eprint={2508.04107},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2508.04107},
}
```