arxiv:2509.22244

FlashEdit: Decoupling Speed, Structure, and Semantics for Precise Image Editing

Published on Sep 26
· Submitted by taesiri on Sep 29
Abstract

AI-generated summary: FlashEdit enables real-time, high-fidelity image editing with diffusion models through efficient inversion, background preservation, and localized attention mechanisms.

Text-guided image editing with diffusion models has achieved remarkable quality but suffers from prohibitive latency, hindering real-world applications. We introduce FlashEdit, a novel framework designed to enable high-fidelity, real-time image editing. Its efficiency stems from three key innovations: (1) a One-Step Inversion-and-Editing (OSIE) pipeline that bypasses costly iterative processes; (2) a Background Shield (BG-Shield) technique that guarantees background preservation by selectively modifying features only within the edit region; and (3) a Sparsified Spatial Cross-Attention (SSCA) mechanism that ensures precise, localized edits by suppressing semantic leakage to the background. Extensive experiments demonstrate that FlashEdit maintains superior background consistency and structural integrity while performing edits in under 0.2 seconds, a more than 150× speedup over prior multi-step methods. Our code will be made publicly available at https://github.com/JunyiWuCode/FlashEdit.
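The BG-Shield idea of modifying features only inside the edit region can be pictured as a masked blend of feature maps: outside the mask, the source features pass through unchanged, so the background is preserved by construction. The following is a minimal toy sketch of that idea, not the paper's implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def bg_shield_blend(source_feats, edit_feats, mask):
    """Toy sketch of background-preserving feature blending.

    source_feats, edit_feats: (C, H, W) feature maps from the source
    image and the edited branch.
    mask: (H, W) binary mask, 1 inside the edit region.
    Outside the mask, source features are passed through untouched.
    """
    m = mask[None, :, :].astype(source_feats.dtype)  # broadcast over channels
    return m * edit_feats + (1.0 - m) * source_feats

# Example: background positions keep their source features exactly.
C, H, W = 4, 8, 8
rng = np.random.default_rng(0)
src = rng.random((C, H, W))
edt = rng.random((C, H, W))
mask = np.zeros((H, W))
mask[2:6, 2:6] = 1  # hypothetical edit region
out = bg_shield_blend(src, edt, mask)
assert np.array_equal(out[:, 0, 0], src[:, 0, 0])  # background untouched
assert np.array_equal(out[:, 3, 3], edt[:, 3, 3])  # edit region replaced
```

The same masking intuition extends to the paper's SSCA mechanism, where attention outside the edit region is suppressed rather than blended.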

Community


This paper shares a very similar idea with SwiftEdit (https://arxiv.org/abs/2412.04301, CVPR 2025), but includes no citation of or discussion of that work.

Hi,
I am the first author of SwiftEdit (https://swift-edit.github.io/), which was accepted at CVPR 2025 prior to this work. I found that this work strongly resembles SwiftEdit in its core idea, architectural design, training strategy, and editing process.
While I fully support open research and welcome contributions that build on our work, proper citation and acknowledgment are fundamental to research integrity. I am concerned that SwiftEdit is not cited here despite the significant overlap. I kindly request that the authors update this project page and the associated paper to give appropriate credit.

