Inversion-DPO

Official Inversion-DPO weights fine-tuned from Stable Diffusion XL. Only the trained UNet module is provided.

Paper: Inversion-DPO: Precise and Efficient Post-Training for Diffusion Models (arXiv:2507.11554)

Code Repository: https://github.com/MIGHTYEZ/Inversion-DPO

Model Description

This repository contains the fine-tuned UNet weights produced by the Inversion-DPO method, built on Stable Diffusion XL. The model was trained with Direct Preference Optimization (DPO) combined with inversion techniques to improve generation quality and prompt alignment. Only the UNet is released; all other pipeline components are loaded from the SDXL base model.

Quick Start

from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel
import torch

# Load the fine-tuned UNet
unet = UNet2DConditionModel.from_pretrained(
    "ezlee258258/Inversion-DPO",
    subfolder="unet",
    torch_dtype=torch.float16,
)

# Load the SDXL base pipeline and swap in the fine-tuned UNet
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    unet=unet,
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate images
prompt = "A beautiful landscape with mountains and lakes"
image = pipe(prompt).images[0]
image.save("output.png")

Citation

If you use this model in your research, please cite our work:

@misc{li2025inversiondpo,
    title={Inversion-DPO: Precise and Efficient Post-Training for Diffusion Models},
    author={Zejian Li and Yize Li and Chenye Meng and Zhongni Liu and Yang Ling and Shengyuan Zhang and Guang Yang and Changyuan Yang and Zhiyuan Yang and Lingyun Sun},
    year={2025},
    eprint={2507.11554},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Acknowledgments

Built upon Stable Diffusion XL by Stability AI.

Contact

For questions and support, please visit our GitHub repository.
