---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: image-segmentation
---
<h1 align="center"> Side Adapter Network for Open-Vocabulary Semantic Segmentation</h1>
<div align="center"><a href="https://arxiv.org/abs/2302.12242">Paper</a> | <a href="https://github.com/MendelXu/SAN">Code</a> | <a href="https://huggingface.co/spaces/Mendel192/SAN-Demo">Demo</a></div>
## Model description
SAN models the semantic segmentation task as a region recognition problem. A side network is attached to a frozen CLIP model with two branches: one predicts mask proposals, and the other predicts attention biases that are applied inside the CLIP model to recognize the class of each mask. This decoupled design has the benefit that CLIP stays dedicated to recognizing the classes of the mask proposals. Since the attached side network can reuse CLIP features, it can be very light. In addition, the entire network can be trained end-to-end, allowing the side network to adapt to the frozen CLIP model, which makes the predicted mask proposals CLIP-aware.
Our approach is fast and accurate, and adds only a few additional trainable parameters. We evaluate it on multiple semantic segmentation benchmarks, where it significantly outperforms comparable methods with up to 18 times fewer trainable parameters and up to 19 times faster inference.
![](arch.png)
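For intuition, here is a minimal PyTorch sketch of the two-branch side adapter described above. All class, dimension, and head names are hypothetical simplifications, not the actual implementation; see the linked code for the real model.

```python
import torch
import torch.nn as nn

class SideAdapter(nn.Module):
    """Illustrative sketch of SAN's two-branch side network (names are hypothetical)."""

    def __init__(self, clip_dim=768, side_dim=240, num_queries=100):
        super().__init__()
        # Lightweight projection that reuses features from the frozen CLIP encoder.
        self.fuse = nn.Linear(clip_dim, side_dim)
        # Learnable queries, one per candidate mask proposal.
        self.queries = nn.Embedding(num_queries, side_dim)
        # Branch 1: mask proposal prediction.
        self.mask_head = nn.Linear(side_dim, side_dim)
        # Branch 2: attention bias, applied inside the frozen CLIP layers so that
        # CLIP's recognition tokens attend within each proposed mask region.
        self.bias_head = nn.Linear(side_dim, side_dim)

    def forward(self, clip_feats):
        # clip_feats: (B, N, clip_dim) patch tokens from the frozen CLIP encoder.
        x = self.fuse(clip_feats)                                     # (B, N, side_dim)
        q = self.queries.weight.unsqueeze(0).expand(x.size(0), -1, -1)
        # Query-to-patch similarity yields one mask logit map per query.
        masks = torch.einsum("bqc,bnc->bqn", self.mask_head(q), x)    # (B, Q, N)
        # A separate set of biases is predicted for CLIP's attention layers.
        attn_bias = torch.einsum("bqc,bnc->bqn", self.bias_head(q), x)
        return masks, attn_bias

adapter = SideAdapter()
clip_feats = torch.randn(2, 196, 768)  # e.g. 14x14 patch tokens from a frozen ViT-B/16
masks, attn_bias = adapter(clip_feats)
print(masks.shape, attn_bias.shape)    # torch.Size([2, 100, 196]) for both
```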
| Model| Pretrained Weights|
|-----------|-------------------|
| SAN-ViT-B/16| [san_vit_b_16.pth](https://huggingface.co/Mendel192/san/blob/main/san_vit_b_16.pth)|
| SAN-ViT-L/14| [san_vit_large_14.pth](https://huggingface.co/Mendel192/san/blob/main/san_vit_large_14.pth)|
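The checkpoints can also be fetched programmatically with the `huggingface_hub` client; the downloaded file is then consumed by the training and inference scripts in the SAN repository:

```python
from huggingface_hub import hf_hub_download

# Downloads and caches the checkpoint, returning its local path.
ckpt_path = hf_hub_download(repo_id="Mendel192/san", filename="san_vit_b_16.pth")
print(ckpt_path)
```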
## Citation
If you find SAN useful in your research, please cite the following paper:
```
@inproceedings{xu2023side,
  title={Side Adapter Network for Open-Vocabulary Semantic Segmentation},
  author={Xu, Mengde and Zhang, Zheng and Wei, Fangyun and Hu, Han and Bai, Xiang},
  booktitle={CVPR},
  year={2023},
  eprint={2302.12242},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```