Commit 639d1e0 (verified), committed by nielsr (HF Staff) · 1 parent: 8fab4ad

Improve model card with metadata and links

This PR improves the model card by:

- Adding the `pipeline_tag: image-to-image` to reflect the model's functionality.
- Specifying the `library_name: diffusers` as the model uses the Diffusers library.
- Linking to the project page and GitHub repository for detailed usage instructions and examples.
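For reference, the same metadata change can also be applied programmatically with `huggingface_hub` instead of editing the README by hand. The sketch below is only an illustration: the repo id is a placeholder, and a write token (e.g. via `huggingface-cli login`) is assumed.

```python
# Sketch: push pipeline_tag / library_name metadata to a model card programmatically.
# The repo id below is a placeholder, not the actual repository name.
from huggingface_hub import metadata_update

metadata_update(
    repo_id="your-org/your-repa-e-model",  # hypothetical repo id
    metadata={
        "pipeline_tag": "image-to-image",
        "library_name": "diffusers",
    },
    overwrite=False,  # existing keys such as `license: mit` are preserved
    commit_message="Add pipeline_tag and library_name metadata",
)
```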

Files changed (1): README.md (+17, -3)
README.md CHANGED
@@ -1,3 +1,17 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ pipeline_tag: image-to-image
+ library_name: diffusers
+ ---
+
+ <h1 align="center"> REPA-E: Unlocking VAE for End-to-End Tuning of Latent Diffusion Transformers </h1>
+
+ <p align="center">
+ [🌐 Project Page](https://end2end-diffusion.github.io) &ensp;
+ [📃 Paper](https://arxiv.org/abs/2504.10483) &ensp;
+ [🤗 Github](https://github.com/REPA-E/REPA-E)
+ </p>
+
+ ![](assets/vis-examples.jpg)
+
+ REPA-E enables stable and effective joint training of both the VAE and the diffusion model, significantly accelerating training and improving generation quality. It achieves state-of-the-art FID scores on ImageNet 256×256. For detailed usage instructions, including environment setup, training, and evaluation, please refer to the [project page](https://end2end-diffusion.github.io) and the [GitHub repository](https://github.com/REPA-E/REPA-E).
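Since the card now declares `library_name: diffusers`, a minimal loading sketch may be useful. It is not part of the commit above and assumes the repository ships a diffusers-compatible `AutoencoderKL` checkpoint; the repo id is a placeholder, and the official GitHub repository remains the authoritative usage reference.

```python
# Minimal sketch: round-trip an image through a REPA-E tuned VAE with diffusers,
# assuming an AutoencoderKL-format checkpoint is available. Repo id is hypothetical.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("REPA-E/placeholder-repo")  # placeholder repo id
vae.eval()

image = torch.randn(1, 3, 256, 256)  # dummy input at the 256x256 resolution from the card
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()
    reconstruction = vae.decode(latents).sample
print(latents.shape, reconstruction.shape)
```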