---
license: mit
library_name: diffusers
pipeline_tag: image-to-image
---

# REPA-E: Unlocking VAE for End-to-End Tuning of Latent Diffusion Transformers

This model implements REPA-E, an approach for end-to-end tuning of latent diffusion transformers, as described in the paper *REPA-E: Unlocking VAE for End-to-End Tuning of Latent Diffusion Transformers*. REPA-E enables stable and effective joint training of both the VAE and the diffusion model, leading to faster training and improved generation quality.
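The core idea of joint training can be illustrated with a toy objective: the usual diffusion loss is combined with a representation-alignment term (here, one minus cosine similarity between diffusion-model features and features from a pretrained visual encoder), and the combined loss is backpropagated through both the diffusion transformer and the VAE. The function names, the loss weight, and the exact form of the alignment term below are illustrative, not the paper's definitions:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def joint_loss(diffusion_loss, dit_features, encoder_features, lambda_align=0.5):
    # Illustrative alignment term: pull the diffusion transformer's features
    # toward those of a pretrained encoder (1 - cosine similarity).
    alignment_loss = 1.0 - cosine_similarity(dit_features, encoder_features)
    # In end-to-end training, gradients of this combined objective flow into
    # both the diffusion model and the VAE.
    return diffusion_loss + lambda_align * alignment_loss

rng = np.random.default_rng(0)
feats = rng.normal(size=128)
# When the features already match, only the diffusion term remains.
print(joint_loss(0.8, feats, feats))
```

This is only a sketch of the shape of the objective; see the paper for the actual losses and how stability of the joint optimization is achieved.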

For more information, please refer to the paper and the project page.

## Usage

You can use this model with the diffusers library. Here's a basic example:

```python
from diffusers import DiffusionPipeline

# Load the pipeline (replace "REPA-E/your-model-name" with the actual model id)
pipeline = DiffusionPipeline.from_pretrained("REPA-E/your-model-name")

# Generate an image
image = pipeline().images[0]

# Save the image
image.save("generated_image.png")
```

Please refer to the GitHub repository for detailed instructions and more advanced usage examples.