Matrix53 committed · Commit 64e66b9 (verified) · 1 parent: 3e55ced

Update README.md

Files changed (1): README.md (+35, -0)

README.md (content added in this commit):

<h1 align='center'>ELBO-T2IAlign: A Generic ELBO-Based Method for Calibrating Pixel-level Text-Image Alignment in Diffusion Models</h1>
<p align="center"> <span style="color:#137cf3; font-family: Gill Sans">Qin Zhou</span>, <span style="color:#137cf3; font-family: Gill Sans">Zhiyang Zhang</span>, <span style="color:#137cf3; font-family: Gill Sans">Jinglong Wang</span>, <span style="color:#137cf3; font-family: Gill Sans">Xiaobin Li</span>, <span style="color:#137cf3; font-family: Gill Sans">Jing Zhang</span><sup>*</sup>, <span style="color:#137cf3; font-family: Gill Sans">Qian Yu</span>, <span style="color:#137cf3; font-family: Gill Sans">Lu Sheng</span>, <span style="color:#137cf3; font-family: Gill Sans">Dong Xu</span> <br>
<span style="font-size: 16px">Beihang University</span>, <span style="font-size: 16px">University of Hong Kong</span></p>

<div align="center">
  <a href="https://vcg-team.github.io/elbo-t2ialign-webpage/"><img src="https://img.shields.io/static/v1?label=elbo-t2ialign&message=Project&color=purple"></a>
  <a href="https://arxiv.org/abs/2506.09740"><img src="https://img.shields.io/static/v1?label=Paper&message=Arxiv&color=red&logo=arxiv"></a>
  <a href="https://github.com/VCG-team/elbo-t2ialign"><img src="https://img.shields.io/static/v1?label=Code&message=Github&color=blue&logo=github"></a>
  <a href="https://huggingface.co/datasets/Matrix53/elbo-t2ialign"><img src="https://img.shields.io/static/v1?label=Dataset&message=HuggingFace&color=yellow&logo=huggingface"></a>
</div>

## Abstract

Diffusion models excel at image generation. Recent studies have shown that these models not only generate high-quality images but also encode text-image alignment information through attention maps or loss functions. This information is valuable for various downstream tasks, including segmentation, text-guided image editing, and compositional image generation. However, current methods rely heavily on the assumption of perfect text-image alignment in diffusion models, which does not hold in practice. In this paper, we propose using zero-shot referring image segmentation as a proxy task to evaluate the pixel-level image and class-level text alignment of popular diffusion models. We conduct an in-depth analysis of pixel-text misalignment in diffusion models from the perspective of training data bias and find that misalignment occurs in images with small-sized, occluded, or rare object classes. We therefore propose ELBO-T2IAlign, a simple yet effective method that calibrates pixel-text alignment in diffusion models based on the evidence lower bound (ELBO) of the likelihood. Our method is training-free and generic: it eliminates the need to identify the specific cause of misalignment and works well across various diffusion model architectures. Extensive experiments on commonly used benchmark datasets for image segmentation and generation verify the effectiveness of our proposed calibration approach.
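
As background on the "ELBO" in the method name, here is a rough sketch in standard diffusion-model notation (not the paper's exact calibration formula): the likelihood of an image `x` under a text condition `c` is commonly lower-bounded via the noise-prediction objective, so the denoising loss can act as a likelihood proxy.

```latex
% Sketch in common notation: \epsilon_\theta is the denoiser, x_t the noised
% image at timestep t, w(t) a weighting term, C a constant.
-\log p_\theta(x \mid c)
  \;\le\;
  \mathbb{E}_{t,\;\epsilon \sim \mathcal{N}(0, I)}
    \left[ w(t)\,\bigl\lVert \epsilon_\theta(x_t, t, c) - \epsilon \bigr\rVert_2^2 \right]
  + C
```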

## Details

This repository contains all datasets used in our paper, including COCO, VOC, Context, and more.

Only the validation sets are included; they take up about 7 GB of disk space.
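
If you have not already downloaded the archive parts, one option is the `huggingface-cli download` command from the `huggingface_hub` package (a minimal sketch; the local directory name is just an example, and the split archive parts are assumed to sit at the repository root):

```bash
# Sketch: fetch this dataset repository with the Hugging Face Hub CLI
# (the ./elbo-t2ialign target directory is an arbitrary example).
pip install -U "huggingface_hub[cli]"
huggingface-cli download Matrix53/elbo-t2ialign --repo-type dataset --local-dir ./elbo-t2ialign
cd ./elbo-t2ialign
```

Then reassemble and extract the split archive: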
```bash
# Concatenate the split archive parts and extract the result
cat dataset.tar.gz.* > dataset.tar.gz
tar -xzf dataset.tar.gz
```

## Citation

```bibtex
@article{zhou2025elbo,
  title={ELBO-T2IAlign: A Generic ELBO-Based Method for Calibrating Pixel-level Text-Image Alignment in Diffusion Models},
  author={Zhou, Qin and Zhang, Zhiyang and Wang, Jinglong and Li, Xiaobin and Zhang, Jing and Yu, Qian and Sheng, Lu and Xu, Dong},
  journal={arXiv preprint arXiv:2506.09740},
  year={2025}
}
```