---
license: mit
library_name: diffusers
pipeline_tag: image-to-image
---
# REPA-E: Unlocking VAE for End-to-End Tuning of Latent Diffusion Transformers
This model implements the REPA-E approach for end-to-end tuning of latent diffusion transformers, as described in the paper *REPA-E: Unlocking VAE for End-to-End Tuning of Latent Diffusion Transformers*. REPA-E enables stable and effective joint training of both the VAE and the diffusion model, leading to faster training and improved generation quality.
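To give a rough idea of what "joint training" means here, the sketch below shows one possible training step in which a denoising loss updates only the diffusion transformer while a representation-alignment loss is backpropagated through both the transformer and the VAE. This is a heavily simplified conceptual sketch, not the authors' implementation: the module names (`vae`, `sit`, `vision_encoder`, `proj`), the flow-matching-style objective, and the loss weighting are all illustrative assumptions; see the paper and repository for the actual training recipe.

```python
import torch
import torch.nn.functional as F

def repa_e_step(vae, sit, vision_encoder, proj, images, optimizer, lambda_repa=0.5):
    """One illustrative training step (hypothetical module names).
    `sit` is assumed to return (prediction, hidden_tokens); `vision_encoder`
    is a frozen pretrained encoder whose patch features match `proj(hidden_tokens)`."""
    latents = vae.encode(images)                      # trainable VAE encoder
    noise = torch.randn_like(latents)
    t = torch.rand(latents.size(0), device=latents.device).view(-1, 1, 1, 1)

    # (1) Denoising loss on *detached* latents: gradients update the
    #     diffusion transformer only, not the VAE.
    noisy_detached = (1 - t) * latents.detach() + t * noise
    pred, _ = sit(noisy_detached, t)
    diffusion_loss = F.mse_loss(pred, noise - latents.detach())

    # (2) Representation-alignment loss on *non-detached* latents:
    #     gradients flow through the transformer AND the VAE, tuning both.
    noisy = (1 - t) * latents + t * noise
    _, hidden = sit(noisy, t)
    with torch.no_grad():
        target = vision_encoder(images)               # frozen target features
    repa_loss = 1.0 - F.cosine_similarity(proj(hidden), target, dim=-1).mean()

    loss = diffusion_loss + lambda_repa * repa_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return {"diffusion": diffusion_loss.item(), "repa": repa_loss.item()}
```

In practice the two losses can be computed more efficiently in a single forward pass; the sketch separates them only to make the gradient routing explicit.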
For more information, please refer to the following resources:
- Project Page: https://end2end-diffusion.github.io
- GitHub Repository: https://github.com/REPA-E/REPA-E
## Usage
You can use this model with the `diffusers` library. Here's a basic example:
```python
from diffusers import DiffusionPipeline

# Load the pipeline (replace "REPA-E/your-model-name" with the actual model repository ID)
pipeline = DiffusionPipeline.from_pretrained("REPA-E/your-model-name")

# Generate an image
image = pipeline().images[0]

# Save the image
image.save("generated_image.png")
```
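If a GPU is available, you can move the pipeline to it before generation, e.g. `pipeline = pipeline.to("cuda")`. The exact pipeline class, call arguments, and conditioning inputs depend on the specific checkpoint, so check the repository for the options supported by this model.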
Please refer to the GitHub repository for detailed instructions and more advanced usage examples.