Update README.md
README.md CHANGED
@@ -83,6 +83,65 @@ pipeline = StableDiffusionXLPipeline.from_pretrained(
    model, scheduler=dpmsolver, torch_dtype=torch.float16,
).to("cuda")
```

## Variational Autoencoder (VAE) Installation 🖼
There are two ways to get a [Variational Autoencoder (VAE)](https://huggingface.co/learn/computer-vision-course/en/unit5/generative-models/variational_autoencoders) file into the model: download it manually, or fetch it remotely with code. In this repository, I'll explain the code method, since it's the more efficient one. The first step is to obtain the VAE file itself; you can grab it manually or remotely, but I recommend the remote route. VAE files usually come in `.safetensors` format, and there are two websites you can visit to download one: HuggingFace and CivitAI.
#### From HuggingFace 😊
This method is pretty straightforward. Pick any VAE repository you like, then navigate to "Files" and find the VAE file. Make sure to click the file.

Copy the "Copy Download Link" for the file; you'll need it in a moment.
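
For reference, the copied link usually looks something like the line below (the repository and file names here are made up for illustration).
```py
# A typical HuggingFace "Copy Download Link" URL points at a resolve/ path:
link = "https://huggingface.co/username/model/resolve/main/vae.safetensors"
```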

The next step is to import the [AutoencoderKL](https://huggingface.co/docs/diffusers/en/api/models/autoencoderkl) class alongside the pipeline.
```py
from diffusers import StableDiffusionXLPipeline, AutoencoderKL
```
Finally, load the VAE file into [AutoencoderKL](https://huggingface.co/docs/diffusers/en/api/models/autoencoderkl).
```py
link = "your vae's link"
model = "IDK-ab0ut/Yiffymix_v51"
vae = AutoencoderKL.from_single_file(link).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
    model, vae=vae).to("cuda")
```
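
After that, the pipeline is used exactly as before. Here's a minimal usage sketch; the prompt and step count are placeholders, not values from this repository.
```py
# Generate one image with the VAE-equipped pipeline (illustrative settings).
image = pipeline(
    prompt="a detailed landscape painting",
    num_inference_steps=30,
).images[0]
image.save("output.png")
```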

If you're using FP16 for the model, it's essential to also use FP16 for the VAE, so the two don't run in mismatched precisions.
```py
import torch

link = "your vae's link"
model = "IDK-ab0ut/Yiffymix_v51"
vae = AutoencoderKL.from_single_file(
    link, torch_dtype=torch.float16).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
    model, torch_dtype=torch.float16,
    vae=vae).to("cuda")
```

In case you're experiencing an `HTTP404` error because the program can't resolve your link, here's a simple fix.

First, install [huggingface_hub](https://huggingface.co/docs/huggingface_hub/en/index) using `pip`.
```py
!pip install --upgrade huggingface_hub
```
Then import [hf_hub_download()](https://huggingface.co/docs/huggingface_hub/en/guides/download) from [huggingface_hub](https://huggingface.co/docs/huggingface_hub/en/index).
```py
from huggingface_hub import hf_hub_download
```

Next, instead of a direct link to the file, use the repository ID and the file name.
```py
repo = "username/model"
file = "the vae's file.safetensors"
vae = AutoencoderKL.from_single_file(
    hf_hub_download(repo_id=repo, filename=file)).to("cuda")
# Use 'torch_dtype=torch.float16' for FP16.
# Add a 'subfolder="folder_name"' argument if the VAE is in a specific folder of the repository.
```
#### From CivitAI 🇨
It's trickier if the VAE is on [CivitAI](https://civitai.com), because you can't point the `from_single_file()` method at a CivitAI link; it only works for files hosted on HuggingFace. To solve this issue, you may use a `wget` or `curl` command to fetch the file from outside HuggingFace, as sketched below. (To be continued)
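
A rough sketch of that idea, assuming a notebook environment like the `!pip` step above; the download URL is a placeholder you'd copy from the model's CivitAI page, and the local file name is arbitrary.
```py
# Hypothetical example: fetch the VAE with wget, then load it from the local path.
!wget "https://civitai.com/api/download/models/000000" -O vae.safetensors

vae = AutoencoderKL.from_single_file("vae.safetensors").to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "IDK-ab0ut/Yiffymix_v51", vae=vae).to("cuda")
```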

# That's all for this repository. Thank you for reading my silly note. Have a nice day!