arXiv:2506.19103

Inverse-and-Edit: Effective and Fast Image Editing by Cycle Consistency Models

Published on Jun 23 · Submitted by ai-alanov on Jun 26
#3 Paper of the day

Abstract

AI-generated summary: A new framework using consistency models enhances image inversion and editing efficiency, achieving state-of-the-art results in as few as four steps.

Recent advances in image editing with diffusion models have achieved impressive results, offering fine-grained control over the generation process. However, these methods are computationally intensive because of their iterative nature. While distilled diffusion models enable faster inference, their editing capabilities remain limited, primarily because of poor inversion quality. High-fidelity inversion and reconstruction are essential for precise image editing, as they preserve the structural and semantic integrity of the source image. In this work, we propose a novel framework that enhances image inversion using consistency models, enabling high-quality editing in just four steps. Our method introduces a cycle-consistency optimization strategy that significantly improves reconstruction accuracy and enables a controllable trade-off between editability and content preservation. We achieve state-of-the-art performance across various image editing tasks and datasets, demonstrating that our method matches or surpasses full-step diffusion models while being substantially more efficient. The code of our method is available on GitHub at https://github.com/ControlGenAI/Inverse-and-Edit.
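The abstract outlines a cycle-consistency objective: an image is inverted to a noisy latent in a few consistency-model steps, reconstructed with a few sampling steps, and the round-trip reconstruction error is minimized. Below is a minimal PyTorch sketch of that loop under stated assumptions; the toy networks, the 4-step schedule, and all names are hypothetical illustrations, not the authors' implementation (the actual code is in the linked GitHub repository).

```python
# Minimal sketch of the cycle-consistency idea: invert an image with a
# few consistency-model steps, reconstruct it, and penalize the gap.
# The toy modules, the 4-step schedule, and all names below are
# illustrative assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn

DIM = 64  # toy feature dimension standing in for image latents

class ToyConsistencyModel(nn.Module):
    """Stand-in for a distilled consistency model f(x_t, t) -> x_0 estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(DIM + 1, 128), nn.SiLU(), nn.Linear(128, DIM)
        )

    def forward(self, x_t: torch.Tensor, t: float) -> torch.Tensor:
        t_col = torch.full((x_t.shape[0], 1), float(t))  # timestep column
        return self.net(torch.cat([x_t, t_col], dim=-1))

generator = ToyConsistencyModel().requires_grad_(False)  # frozen sampler
inverter = ToyConsistencyModel()                         # trained for inversion

SIGMAS = [0.25, 0.5, 0.75, 1.0]  # 4-step noise schedule (illustrative)

def invert(x0: torch.Tensor) -> torch.Tensor:
    """Map a clean sample to a noisy latent in four learned steps."""
    x = x0
    for s in SIGMAS:
        x = inverter(x, s)
    return x

def reconstruct(z: torch.Tensor) -> torch.Tensor:
    """Standard multistep consistency sampling from the inverted latent."""
    x = generator(z, SIGMAS[-1])             # one-step estimate of x_0
    for s in reversed(SIGMAS[:-1]):          # refinement: re-noise, then denoise
        x_noisy = x + s * torch.randn_like(x)
        x = generator(x_noisy, s)
    return x

opt = torch.optim.Adam(inverter.parameters(), lr=1e-4)
x0 = torch.randn(8, DIM)  # pretend batch of image latents
for _ in range(3):        # a few optimization steps
    loss = torch.mean((reconstruct(invert(x0)) - x0) ** 2)  # cycle loss
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"cycle-consistency loss: {loss.item():.4f}")
```

In this sketch only the inverter is optimized, so the frozen generator keeps its generative behavior while the cycle loss improves reconstruction fidelity; one plausible knob for the editability/content-preservation trade-off mentioned above would be how strongly this loss is weighted, though the paper's actual mechanism may differ.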

Community

Paper submitter

We propose a new image editing framework using consistency models that achieves high-fidelity inversion and precise editing in just four steps, outperforming traditional diffusion models in efficiency while maintaining quality. The code is available on GitHub: https://github.com/ControlGenAI/Inverse-and-Edit

