OOM during VAE decoding
I consistently get an OOM error at the VAE decoding step. This is with 24 GB of VRAM (7900 XTX, official ROCm 7.1 PyTorch wheels from AMD). It doesn't happen with the model I was using previously (hunyuanvideo1.5_720p_i2v-Q6_K.gguf). Any suggestions?
Try tiled VAE decoding. I just use the usual fp16 VAE.
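If you ever run the pipeline outside ComfyUI, the same idea is a single call in diffusers. A minimal sketch; the model id and latent shape are placeholder examples, not anything from this workflow:

```python
import torch
from diffusers import AutoencoderKL

# Example model id only; swap in whatever VAE your pipeline actually uses.
vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
).to("cuda")
vae.enable_tiling()  # decode in overlapping tiles instead of one full-resolution pass

# Fake latents just to exercise the decode path (SD-style 4-channel latents).
latents = torch.randn(1, 4, 128, 128, dtype=torch.float16, device="cuda")
with torch.no_grad():
    image = vae.decode(latents / vae.config.scaling_factor).sample
print(image.shape)  # (1, 3, 1024, 1024)
```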
The provided workflow already uses tiled VAE decoding. It's not working.
The GGUF model I was using was ~7 GB, but this one is 18 GB. Perhaps that's why? I was under the impression that ComfyUI would unload the main model before the VAE decoding step, but apparently it doesn't, even when I load the VAE separately.
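What I expected is basically this, written as plain PyTorch. The models here are toy stand-ins, and this is not how ComfyUI manages memory internally, just the general pattern:

```python
import gc
import torch

# Toy stand-ins for the 18 GB diffusion model and the small VAE;
# names and sizes are illustrative only.
main_model = torch.nn.Linear(4096, 4096, device="cuda", dtype=torch.float16)
vae = torch.nn.Linear(16, 3, device="cuda", dtype=torch.float16)

latents = torch.randn(1, 16, device="cuda", dtype=torch.float16)

# Drop the main model *before* decoding so the VAE has the card to itself.
del main_model
gc.collect()
torch.cuda.empty_cache()

with torch.no_grad():
    frames = vae(latents)
```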
Update: Switching out the "VAE Decode (Tiled)" node for the standard VAE Decode node worked!
Thank you for this model. It's TONS faster than the OG model!
Note that it's slower and a bit lossy.
Better that than not working at all.
You can also save the latents and decode them later.
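ComfyUI has SaveLatent / LoadLatent nodes for this, if I remember right. The plain-PyTorch equivalent is just a tensor dump; the path and shape below are made-up placeholders, not from this workflow:

```python
import torch

# After sampling: dump the latents instead of decoding right away.
latents = torch.randn(1, 16, 9, 64, 64, dtype=torch.float16)
torch.save(latents.cpu(), "run_001_latents.pt")

# Later, in a fresh process where the big model was never loaded,
# reload and run only the VAE decode step with the full 24 GB free.
latents = torch.load("run_001_latents.pt")
```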
Not very practical, and the process is tedious enough as it is. Every other run fails due to OOM; at least comfy then unloads the models automatically, allowing the next run to succeed.