# lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill & Wan-AI/Wan2.1-VACE-14B Scopes Addon Experiment
⚠️ Notice:
This project is intended for experimental use only.
This is an addon experiment of lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill and Wan-AI/Wan2.1-VACE-14B scopes.
The process involved extracting the VACE scopes and injecting them into the target model, using scripts provided by wsbagnsv1.
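Conceptually, the injection step amounts to copying the VACE-scope tensors into the base model's state dict. The sketch below is only an illustration of that idea; the file names and the `vace_` key prefix are assumptions, and the actual merge was performed with the scripts by wsbagnsv1, which may handle keys differently.

```python
# Minimal sketch (not the actual merge scripts): copy VACE-scope tensors
# into the base StepDistill/CfgDistill checkpoint. File names and the
# "vace_" key prefix are assumptions for illustration only.
from safetensors.torch import load_file, save_file

base = load_file("wan2.1_t2v_14b_stepdistill_cfgdistill_fp16.safetensors")
vace = load_file("wan2.1_vace_14b_fp16.safetensors")

merged = dict(base)
for name, tensor in vace.items():
    # Only add tensors that belong to the VACE scope and are missing from the base.
    if name.startswith("vace_") and name not in merged:
        merged[name] = tensor

save_file(merged, "wan2.1_t2v_14b_lightx2v_stepcfgdistill_vace_fp16.safetensors")
```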
The FP16 model weights were then quantized to specific FP8 formats (E4M3FN and E5M2) using the ComfyUI custom node ComfyUI-ModelQuantizer by lum3on.
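For intuition, a plain FP8 cast with PyTorch (2.1 or newer) looks roughly like the sketch below. This is an illustration of the two FP8 formats only, not what ComfyUI-ModelQuantizer does internally; the node may apply additional scaling or per-tensor handling, and the file names here are placeholders.

```python
# Illustrative FP8 cast only; ComfyUI-ModelQuantizer may differ (scaling, exclusions).
import torch
from safetensors.torch import load_file, save_file

state = load_file("model_fp16.safetensors")  # placeholder file name

for tag, dtype in (("e4m3fn", torch.float8_e4m3fn), ("e5m2", torch.float8_e5m2)):
    quantized = {
        name: t.to(dtype) if t.is_floating_point() else t
        for name, t in state.items()
    }
    save_file(quantized, f"model_fp8_{tag}.safetensors")
```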
For GGUF quants, please visit QuantStack/Wan2.1_T2V_14B_LightX2V_StepCfgDistill_VACE-GGUF.
## Usage
The model files can be used in ComfyUI with the WanVaceToVideo node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
|---|---|---|---|
| Main Model | Wan2.1_T2V_14B_LightX2V_StepCfgDistill_VACE | ComfyUI/models/diffusion_models | Safetensors (this repo) |
| Text Encoder | umt5-xxl-encoder | ComfyUI/models/text_encoders | Safetensors / GGUF |
| VAE | Wan2_1_VAE_bf16 | ComfyUI/models/vae | Safetensors |
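For reference, a typical resulting folder layout might look like this (the exact file names depend on which quantization you download and are placeholders here):

```
ComfyUI/
└── models/
    ├── diffusion_models/
    │   └── Wan2.1_T2V_14B_LightX2V_StepCfgDistill_VACE_fp8_e4m3fn.safetensors
    ├── text_encoders/
    │   └── umt5-xxl-encoder.safetensors
    └── vae/
        └── Wan2_1_VAE_bf16.safetensors
```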
## Notes
All original licenses and restrictions from the base models still apply.
## Reference
- For an overview of the Safetensors format, please see the Safetensors documentation.