Self-Forcing LoRA cannot accelerate VACE 14B

#2 opened by makisekurisu-jp

It seems to work only for accelerating 1.3B models; with VACE 14B it has no effect.
(Attached screenshots: workflow.png, 1.png)

LoRA used: `Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors` from https://huggingface.co/Kijai/WanVideo_comfy/tree/main
