Update README.md
README.md CHANGED

```diff
@@ -14,13 +14,20 @@ license: apache-2.0
 - These patched LoRAs are **compatible with** [ComfyUI-nunchaku](https://github.com/mit-han-lab/ComfyUI-nunchaku).
 - Use the **Nunchaku FLUX LoRA Loader** node to load LoRA modules for **SVDQuant FLUX** models.
 
-
 ## 🛠️ Patch References
+
+Some original FLUX LoRA files were missing the `final_layer.adaLN` weights required by **ComfyUI-nunchaku’s FLUX LoRA Loader**.
+This patch script automatically adds **dummy adaLN tensors** to make the LoRA compatible with **SVDQuant FLUX** models.
+
 **Script:** [patch_comfyui_nunchaku_lora.py](https://huggingface.co/lym00/comfyui_nunchaku_lora_patch/blob/main/patch_comfyui_nunchaku_lora.py)
 
-Based on
+**Based on:**
 - **Nunchaku Issue:** [ComfyUI-nunchaku #340](https://github.com/mit-han-lab/ComfyUI-nunchaku/issues/340)
-
+> Node Type: `NunchakuFluxLoraLoader`
+> Exception Type: `KeyError`
+> Exception Message: `'lora_unet_final_layer_adaLN_modulation_1.lora_down.weight'`
+
+- **Example Gist:** [akedia/e0a132b5...](https://gist.github.com/akedia/e0a132b587e30413665d299ad893a60e)
 
 ---
 
```
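The idea behind the patch can be sketched as follows: insert zero-filled `lora_down`/`lora_up` tensors under the missing `final_layer.adaLN` key so the loader's lookup succeeds, while the zero product `up @ down` leaves the model's output unchanged. This is a minimal illustration, not the actual `patch_comfyui_nunchaku_lora.py`; the key names come from the `KeyError` above, but the tensor shapes (FLUX hidden size 3072, adaLN modulation producing shift and scale, and a placeholder rank of 16) are assumptions.

```python
import numpy as np

def add_dummy_adaln(tensors: dict, rank: int = 16, dim: int = 3072) -> dict:
    """Add zero-filled dummy adaLN LoRA tensors if they are missing.

    Sketch only: key names follow the KeyError reported in
    ComfyUI-nunchaku #340; shapes (dim=3072, output 2*dim for
    shift/scale, rank=16) are assumptions, not the real script's values.
    """
    prefix = "lora_unet_final_layer_adaLN_modulation_1"
    down_key = f"{prefix}.lora_down.weight"
    up_key = f"{prefix}.lora_up.weight"
    if down_key not in tensors:
        # Zeros are safe placeholders: lora_up @ lora_down == 0,
        # so the dummy adapter contributes nothing to the layer.
        tensors[down_key] = np.zeros((rank, dim), dtype=np.float32)
        tensors[up_key] = np.zeros((dim * 2, rank), dtype=np.float32)
    return tensors
```

In practice the state dict would be read from and written back to a `.safetensors` file (e.g. with `safetensors`' `load_file`/`save_file`); the dictionary form above just shows the patching step itself.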