|
Combined and quantized models for WanVideo, originating from here: |
|
|
|
https://huggingface.co/Wan-AI/ |
|
|
|
These models can be used with https://github.com/kijai/ComfyUI-WanVideoWrapper and the native ComfyUI WanVideo nodes.
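
For reference, a minimal sketch of fetching one of the model files with `huggingface_hub`. The repo id, filename, and target folder below are placeholders, not actual names from this repository; substitute the real ones and place the file wherever your ComfyUI install or the wrapper expects diffusion models:

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id and filename: replace with this repository's id and the
# exact .safetensors file you want to download.
path = hf_hub_download(
    repo_id="<this-repo-id>",
    filename="<model-file>.safetensors",
    local_dir="ComfyUI/models/diffusion_models",  # assumed target folder; check the wrapper's README
)
print(path)
```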
|
|
|
Other model sources: |
|
|
|
TinyVAE from https://github.com/madebyollin/taehv |
|
|
|
SkyReels: https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9 |
|
|
|
WanVideoFun: https://huggingface.co/collections/alibaba-pai/wan21-fun-v11-680f514c89fe7b4df9d44f17 |
|
|
|
CausVid 14B: https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid |
|
|
|
CausVid 1.3B: https://huggingface.co/tianweiy/CausVid |
|
|
|
AccVideo: https://huggingface.co/aejion/AccVideo-WanX-T2V-14B |
|
|
|
Phantom: https://huggingface.co/bytedance-research/Phantom |
|
|
|
ATI: https://huggingface.co/bytedance-research/ATI |
|
|
|
--- |
|
CausVid LoRAs are experimental extractions from the CausVid finetunes; the aim is to benefit from CausVid's distillation rather than from any actual causal inference.
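
As a rough illustration of what such an extraction involves (a sketch of the common approach, not the exact procedure used for these files), a LoRA can be recovered from a finetune by taking a low-rank SVD approximation of the weight difference against the base model:

```python
import torch

def extract_lora(base_sd, tuned_sd, rank=32):
    """Approximate (tuned - base) per 2D weight with a rank-limited SVD,
    stored as the usual LoRA up/down pair so that delta ~= up @ down."""
    lora = {}
    for name, w_base in base_sd.items():
        w_tuned = tuned_sd.get(name)
        if w_tuned is None or w_base.ndim != 2:
            continue  # only linear (2D) weights are decomposed in this sketch
        delta = (w_tuned - w_base).float()
        U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
        U, S, Vh = U[:, :rank], S[:rank], Vh[:rank, :]
        lora[f"{name}.lora_up"] = U * S    # (out, rank)
        lora[f"{name}.lora_down"] = Vh     # (rank, in)
    return lora
```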
|
--- |
|
v1 = direct extraction; at full strength it has adverse effects on motion and introduces a flashing artifact.
|
|
|
v1.5 = same as v1, but with the first block removed, which fixes the flashing at full strength.
|
|
|
v2 = further pruned version with only the attention layers and no first block; it fixes the flashing, retains motion better, needs more steps, and can also benefit from CFG.
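
As a minimal sketch of how the pruning behind v1.5/v2 could be expressed, and how a LoRA pair is merged into a base weight at a given strength. The key patterns ("blocks.0.", "attn") are assumptions about the checkpoint naming used here for illustration, not the actual keys:

```python
import torch

def filter_lora(lora, drop_first_block=True, attention_only=False):
    """Prune an extracted LoRA, roughly mirroring the v1.5 / v2 variants:
    drop everything in the first transformer block, optionally keep only
    attention layers. Key substrings are placeholders."""
    kept = {}
    for key, tensor in lora.items():
        if drop_first_block and ".blocks.0." in key:
            continue
        if attention_only and "attn" not in key:
            continue
        kept[key] = tensor
    return kept

def merge_lora(weight, up, down, strength=1.0):
    """Apply a LoRA pair to a base weight: W' = W + strength * up @ down."""
    return weight + strength * (up @ down).to(weight.dtype)
```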