FlexIP: Dynamic Control of Preservation and Personality for Customized Image Generation
Abstract
With the rapid advancement of 2D generative models, preserving subject identity while enabling diverse editing has emerged as a critical research focus. Existing methods typically face an inherent trade-off between identity preservation and personalized manipulation. We introduce FlexIP, a novel framework that decouples these objectives through two dedicated components: a Personalization Adapter for stylistic manipulation and a Preservation Adapter for identity maintenance. By explicitly injecting both control mechanisms into the generative model, our framework enables flexible, parameterized control at inference time by dynamically tuning the adapter weights. Experimental results demonstrate that our approach breaks through the performance limitations of conventional methods, achieving superior identity preservation while supporting more diverse personalized generation (Project Page: https://flexip-tech.github.io/flexip/).
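To make the dual-adapter idea concrete, the sketch below shows one way two adapters could be injected into a cross-attention block and blended at inference with a single tunable weight. It is a minimal illustration under assumed design choices (IP-Adapter-style key/value projections over image features, a scalar blend weight `lam`, and all class and argument names are hypothetical), not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualAdapterCrossAttention(nn.Module):
    """Cross-attention with a base text branch plus two image-conditioned
    adapter branches, blended by a scalar weight at inference (illustrative)."""

    def __init__(self, dim: int, img_dim: int, heads: int = 8):
        super().__init__()
        self.heads = heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        # Base text cross-attention projections (kept frozen while training adapters).
        self.to_k_txt = nn.Linear(dim, dim, bias=False)
        self.to_v_txt = nn.Linear(dim, dim, bias=False)
        # Preservation adapter: projections over identity-preserving image features.
        self.to_k_pres = nn.Linear(img_dim, dim, bias=False)
        self.to_v_pres = nn.Linear(img_dim, dim, bias=False)
        # Personalization adapter: projections over editing/stylization features.
        self.to_k_pers = nn.Linear(img_dim, dim, bias=False)
        self.to_v_pers = nn.Linear(img_dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def _attn(self, q, k, v):
        # Multi-head scaled dot-product attention over (batch, tokens, dim) tensors.
        b, n, d = q.shape
        h = self.heads
        q, k, v = (t.reshape(b, -1, h, d // h).transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)
        return out.transpose(1, 2).reshape(b, n, d)

    def forward(self, x, txt_ctx, pres_ctx, pers_ctx, lam: float = 0.5):
        # lam in [0, 1]: larger values lean toward identity preservation,
        # smaller values toward personalized manipulation (assumed convention).
        q = self.to_q(x)
        base = self._attn(q, self.to_k_txt(txt_ctx), self.to_v_txt(txt_ctx))
        pres = self._attn(q, self.to_k_pres(pres_ctx), self.to_v_pres(pres_ctx))
        pers = self._attn(q, self.to_k_pers(pers_ctx), self.to_v_pers(pers_ctx))
        return self.to_out(base + lam * pres + (1.0 - lam) * pers)


# Example: sweep the blend weight at inference without retraining.
block = DualAdapterCrossAttention(dim=320, img_dim=768)
x = torch.randn(1, 64, 320)    # latent tokens entering the block
txt = torch.randn(1, 77, 320)  # text embeddings, assumed already projected to dim
subj = torch.randn(1, 4, 768)  # subject-image tokens from an image encoder
out = block(x, txt, pres_ctx=subj, pers_ctx=subj, lam=0.8)
```

The usage example only demonstrates the interface: in practice the two context streams would come from separate feature extractors, and `lam` is the knob that trades preservation against personalization per generation.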
Community
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability (2025)
- InstaFace: Identity-Preserving Facial Editing with Single Image Inference (2025)
- Personalize Anything for Free with Diffusion Transformer (2025)
- MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing (2025)
- Towards More Accurate Personalized Image Generation: Addressing Overfitting and Evaluation Bias (2025)
- FlipConcept: Tuning-Free Multi-Concept Personalization for Text-to-Image Generation (2025)
- Dynamic Concepts Personalization from Single Videos (2025)