New experimental v2 versions posted: these are designed to bring in more WAN 2.2 features!

This is an "all in one" merge of WAN 2.2, WAN 2.1, accelerators, the VAE, and the umt5 text encoder into a single FP8 checkpoint. FP8 is a good compromise between VRAM usage and precision.

Super simple and designed for speed with as little sacrifice to quality as possible.

You only need the standard "Load Checkpoint" ComfyUI node, which provides the CLIP and VAE outputs alongside the Model.


4 steps, CFG 1, the sa_solver sampler, and the beta scheduler are highly recommended.
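If you drive ComfyUI through its HTTP API instead of the graph editor, the recommended settings map onto a workflow graph like the sketch below. The node class names (`CheckpointLoaderSimple`, `CLIPTextEncode`, `KSampler`, `VAEDecode`) are stock ComfyUI; the checkpoint filename, the video-latent node, and the resolution/length values are assumptions to adjust for your install.

```python
import json
import urllib.request


def build_workflow(prompt_text, negative_text=""):
    """Minimal ComfyUI API graph using the recommended settings
    (4 steps, CFG 1, sa_solver sampler, beta scheduler).

    The checkpoint filename and the EmptyHunyuanLatentVideo node are
    assumptions -- swap in whatever matches your ComfyUI version.
    """
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              # assumed filename -- use the file you downloaded
              "inputs": {"ckpt_name": "wan2.2-rapid-aio.safetensors"}},
        "2": {"class_type": "CLIPTextEncode",
              "inputs": {"text": prompt_text, "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",
              "inputs": {"text": negative_text, "clip": ["1", 1]}},
        "4": {"class_type": "EmptyHunyuanLatentVideo",  # video latent; node name may differ
              "inputs": {"width": 480, "height": 480,
                         "length": 33, "batch_size": 1}},
        "5": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0],
                         "positive": ["2", 0], "negative": ["3", 0],
                         "latent_image": ["4", 0],
                         "seed": 0, "steps": 4, "cfg": 1.0,
                         "sampler_name": "sa_solver", "scheduler": "beta",
                         "denoise": 1.0}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    }


def submit(workflow, host="http://127.0.0.1:8188"):
    """POST the graph to a running ComfyUI instance via its /prompt endpoint."""
    data = json.dumps({"prompt": workflow}).encode()
    req = urllib.request.Request(host + "/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    return json.loads(urllib.request.urlopen(req).read())
```

Note how the Model (`["1", 0]`), CLIP (`["1", 1]`), and VAE (`["1", 2]`) all come from the single checkpoint-loader node, which is the point of the all-in-one merge.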


The new v2 models may be slightly less compatible with WAN 2.1 LoRAs, since v2 brings in more WAN 2.2 features. Adjusting LoRA strengths may help (turn them down if you see noise or artifacts). This merge will likely not be compatible with WAN 2.2 "high" LoRAs, but should have good compatibility with WAN 2.1 and WAN 2.2 "low" LoRAs.
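In API-graph terms, the "turn the strength down" adjustment is just a stock ComfyUI `LoraLoader` node wired between the checkpoint loader and the sampler with `strength_model`/`strength_clip` below 1.0. This sketch assumes a hypothetical graph layout where node "1" is the checkpoint loader, "2"/"3" are the text encoders, and "5" is the KSampler; the 0.6 strength is an arbitrary starting point, not a tested value.

```python
def add_lora(workflow, lora_name, strength=0.6):
    """Insert a stock ComfyUI LoraLoader after the checkpoint loader and
    rewire downstream nodes to its outputs.

    Assumes a graph where node "1" is CheckpointLoaderSimple, "2"/"3" are
    CLIPTextEncode, and "5" is KSampler (hypothetical ids). A strength
    below 1.0 (here 0.6, an arbitrary default) is the recommended
    adjustment when a WAN 2.1 LoRA shows noise or artifacts on this merge.
    """
    workflow["10"] = {"class_type": "LoraLoader",
                      "inputs": {"model": ["1", 0], "clip": ["1", 1],
                                 "lora_name": lora_name,
                                 "strength_model": strength,
                                 "strength_clip": strength}}
    # Repoint the sampler and both text encoders at the LoRA'd outputs.
    workflow["5"]["inputs"]["model"] = ["10", 0]
    workflow["2"]["inputs"]["clip"] = ["10", 1]
    workflow["3"]["inputs"]["clip"] = ["10", 1]
    return workflow
```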


Seems to work even on 8GB of VRAM.


