arxiv:2502.04320

ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features

Published on Feb 6 · Submitted by tmeral on Feb 7
#3 Paper of the day
Abstract

Do the rich representations of multi-modal diffusion transformers (DiTs) exhibit unique properties that enhance their interpretability? We introduce ConceptAttention, a novel method that leverages the expressive power of DiT attention layers to generate high-quality saliency maps that precisely locate textual concepts within images. Without requiring additional training, ConceptAttention repurposes the parameters of DiT attention layers to produce highly contextualized concept embeddings, contributing the major discovery that performing linear projections in the output space of DiT attention layers yields significantly sharper saliency maps compared to commonly used cross-attention mechanisms. Remarkably, ConceptAttention even achieves state-of-the-art performance on zero-shot image segmentation benchmarks, outperforming 11 other zero-shot interpretability methods on the ImageNet-Segmentation dataset and on a single-class subset of PascalVOC. Our work contributes the first evidence that the representations of multi-modal DiT models like Flux are highly transferable to vision tasks like segmentation, even outperforming multi-modal foundation models like CLIP.
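The core operation described in the abstract, projecting image-token outputs of a DiT attention layer onto concept-token outputs to obtain per-concept saliency maps, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the tensor names, the softmax over concepts as the normalization, and the reshape onto a square latent grid are all assumptions made here for clarity.

```python
import torch
import torch.nn.functional as F

def concept_saliency_maps(image_outputs: torch.Tensor,
                          concept_outputs: torch.Tensor,
                          height: int, width: int) -> torch.Tensor:
    """
    image_outputs:   (num_patches, d) image-token outputs of a DiT attention layer
    concept_outputs: (num_concepts, d) outputs of single-word concept tokens passed
                     through the same attention-layer parameters (hypothetical setup)
    Returns a (num_concepts, height, width) stack of saliency maps,
    where num_patches == height * width.
    """
    # Linear projection in the attention *output* space: dot product between
    # each image-patch output and each concept output.
    scores = image_outputs @ concept_outputs.T        # (num_patches, num_concepts)
    # Normalize across concepts so each patch distributes mass over the concepts
    # (one plausible choice of normalization, assumed here).
    scores = F.softmax(scores, dim=-1)
    # Fold the patch axis back onto the latent spatial grid.
    return scores.T.reshape(-1, height, width)        # (num_concepts, h, w)
```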

Community

Paper author · Paper submitter

In our study, we repurpose DiT attention layers using linear projections to generate sharper, more contextualized saliency maps, achieving state-of-the-art zero-shot segmentation on benchmarks like ImageNet-Segmentation and PascalVOC. I would love to hear your thoughts.
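For readers curious how such saliency maps translate into the zero-shot segmentation numbers mentioned above, here is a rough scoring sketch under assumed conventions: the maps are upsampled to the annotation resolution, a per-pixel argmax over concepts picks the predicted class, and IoU is computed against a binary ground-truth mask. The function name, the argmax decision rule, and `target_index` are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def zero_shot_segmentation_iou(saliency: torch.Tensor,
                               gt_mask: torch.Tensor,
                               target_index: int) -> float:
    """
    saliency:     (num_concepts, h, w) concept saliency maps (e.g. from the sketch above)
    gt_mask:      (H, W) binary ground-truth mask for the annotated class
    target_index: index of the concept corresponding to that class
    """
    # Upsample the low-resolution maps to the ground-truth resolution.
    upsampled = F.interpolate(saliency.unsqueeze(0), size=gt_mask.shape,
                              mode="bilinear", align_corners=False).squeeze(0)
    # Predict the class wherever its concept wins the per-pixel argmax.
    pred = upsampled.argmax(dim=0) == target_index
    gt = gt_mask.bool()
    intersection = (pred & gt).sum().item()
    union = (pred | gt).sum().item()
    return intersection / union if union > 0 else 0.0
```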


Really nice. Work like https://arxiv.org/abs/2410.06940 already made it clear that internal representations would be crucial in diffusion models, but I wouldn't have expected such distinct results.

