How to load VACE-GGUF and CausVid lora in diffusers ?
#8 opened by neil075
I'm attempting to re-implement the ComfyUI VACE workflow using Diffusers, but I encountered an error when trying to load the GGUF-quantized transformer model. Is there a supported way to load GGUF-quantized models in Diffusers for VACE?
Attempted Approach
I tried loading the model using WanVACETransformer3DModel.from_single_file() with the following code:
import torch
from diffusers import GGUFQuantizationConfig, WanVACETransformer3DModel

# ckpt_path points to the downloaded .gguf checkpoint from the repo listed below
transformer = WanVACETransformer3DModel.from_single_file(
    ckpt_path,
    config="vace/transformer/config.json",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
Error
The code fails with:
ValueError: FromOriginalModelMixin is currently only compatible with [StableCascadeUNet, UNet2DConditionModel, ...], but not WanVACETransformer3DModel.
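Once the transformer loads, the way I intend to wire it into the pipeline together with the CausVid LoRA is roughly the sketch below; the diffusers-format repo id and the LoRA file path are assumptions on my part rather than verified values.

import torch
from diffusers import WanVACEPipeline

# Plug the GGUF-quantized transformer from the snippet above into the pipeline.
# The repo id is an assumed diffusers-format VACE checkpoint, not verified here.
pipe = WanVACEPipeline.from_pretrained(
    "Wan-AI/Wan2.1-VACE-14B-diffusers",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# CausVid is distributed as a LoRA, so I expect the standard LoRA API to apply.
pipe.load_lora_weights("path/to/causvid_lora.safetensors")  # placeholder path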
Additional Context
- GGUF Model: QuantStack/Wan2.1_14B_VACE-GGUF (download sketch below)
- Workflow Reference: VACE Example Workflow
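For reference, ckpt_path in the from_single_file() snippet is one of the GGUF files from that repo, fetched roughly as shown below; the exact filename is a placeholder for whichever quantization level is chosen.

from huggingface_hub import hf_hub_download

# Download one quantization variant from the GGUF repo.
# The filename is a placeholder; substitute the actual .gguf file in the repo.
ckpt_path = hf_hub_download(
    repo_id="QuantStack/Wan2.1_14B_VACE-GGUF",
    filename="Wan2.1_14B_VACE-Q4_K_M.gguf",  # placeholder filename
)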
Any guidance or alternative approaches would be greatly appreciated!