## ✅ Patched LoRAs
| Patched LoRAs | Original LoRAs |
|---|---|
| flux_kontext_deblur_nunchaku.safetensors | civitai or civitaiarchive |
| flux_kontext_face_detailer_nunchaku.safetensors | civitai or civitaiarchive |
## ⚡ Usage
- These patched LoRAs are compatible with ComfyUI-nunchaku.
- Use the Nunchaku FLUX LoRA Loader node to load LoRA modules for SVDQuant FLUX models.
## 🛠️ Patch References
Some original FLUX LoRA files are missing the `final_layer.adaLN` weights required by ComfyUI-nunchaku's FLUX LoRA Loader. The patch script adds dummy adaLN tensors so these LoRAs load cleanly with SVDQuant FLUX models.

Script: `patch_comfyui_nunchaku_lora.py`
Based on:
- Nunchaku Issue: ComfyUI-nunchaku #340
- Node Type: `NunchakuFluxLoraLoader`
- Exception Type: `KeyError`
- Exception Message: `'lora_unet_final_layer_adaLN_modulation_1.lora_down.weight'`
- Example Gist: akedia/e0a132b5...
## 📋 Example Patch Log
```
Running patch_comfyui_nunchaku_lora.py
🔧 Universal final_layer.adaLN LoRA patcher (.safetensors)
Enter path to input LoRA .safetensors file: Flux_kontext_deblur.safetensors
Enter path to save patched LoRA .safetensors file: flux_kontext_deblur_nunchaku.safetensors
✅ Loaded 610 tensors from: Flux_kontext_deblur.safetensors
🔍 Found final_layer-related keys:
   - lora_unet_final_layer_linear.lora_down.weight
   - lora_unet_final_layer_linear.lora_up.weight
🔍 Checking for final_layer keys with prefix 'lora_unet_final_layer'
   Linear down: lora_unet_final_layer_linear.lora_down.weight
   Linear up:   lora_unet_final_layer_linear.lora_up.weight
✅ Has final_layer.linear: True
❌ Has final_layer.adaLN_modulation_1: False
✅ Added dummy adaLN weights:
   - lora_unet_final_layer_adaLN_modulation_1.lora_down.weight (shape: torch.Size([16, 3072]))
   - lora_unet_final_layer_adaLN_modulation_1.lora_up.weight (shape: torch.Size([64, 16]))
✅ Patched file saved to: flux_kontext_deblur_nunchaku.safetensors
   Total tensors now: 612
🔍 Verifying patched keys:
   - lora_unet_final_layer_adaLN_modulation_1.lora_down.weight
   - lora_unet_final_layer_adaLN_modulation_1.lora_up.weight
   - lora_unet_final_layer_linear.lora_down.weight
   - lora_unet_final_layer_linear.lora_up.weight
✅ Contains adaLN after patch: True
```
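The dummy weights are safe because a LoRA layer's contribution to the base weight is the product of its up and down matrices; with zero tensors that product is identically zero, so the patched LoRA behaves exactly like the original. A quick sanity check, using the shapes reported in the log above:

```python
import torch

# Shapes taken from the patch log; the zero values mirror the dummy tensors.
down = torch.zeros(16, 3072)  # lora_down for final_layer.adaLN
up = torch.zeros(64, 16)      # lora_up for final_layer.adaLN

# The weight delta a LoRA applies is (up @ down), here a (64, 3072) matrix.
delta = up @ down
assert torch.count_nonzero(delta) == 0  # zero delta: output is unchanged
```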