I2V Lora not loaded

#18
by jimmymx - opened

Does anyone know why this happens?

Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
File "asyncio\events.py", line 88, in _run
File "asyncio\proactor_events.py", line 165, in _call_connection_lost
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
FETCH ComfyRegistry Data: 10/93
FETCH ComfyRegistry Data: 15/93
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
FETCH ComfyRegistry Data: 20/93
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
FETCH ComfyRegistry Data: 25/93
FETCH ComfyRegistry Data: 30/93
gguf qtypes: Q6_K (169), F32 (73)
Attempting to recreate sentencepiece tokenizer from GGUF file metadata...
FETCH ComfyRegistry Data: 35/93
FETCH ComfyRegistry Data: 40/93
Created tokenizer with vocab size of 256384
Dequantizing token_embd.weight to prevent runtime OOM.
FETCH ComfyRegistry Data: 45/93
FETCH ComfyRegistry Data: 50/93
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
gguf qtypes: F16 (694), Q6_K (400), F32 (1)
model weight dtype torch.float16, manual cast: None
model_type FLOW
lora key not loaded: blocks.0.cross_attn.k.alpha
lora key not loaded: blocks.0.cross_attn.k.lora_down.weight
lora key not loaded: blocks.0.cross_attn.k.lora_up.weight
lora key not loaded: blocks.0.cross_attn.o.alpha
lora key not loaded: blocks.0.cross_attn.o.lora_down.weight
lora key not loaded: blocks.0.cross_attn.o.lora_up.weight
lora key not loaded: blocks.0.cross_attn.q.alpha
lora key not loaded: blocks.0.cross_attn.q.lora_down.weight
lora key not loaded: blocks.0.cross_attn.q.lora_up.weight
lora key not loaded: blocks.0.cross_attn.v.alpha
lora key not loaded: blocks.0.cross_attn.v.lora_down.weight
lora key not loaded: blocks.0.cross_attn.v.lora_up.weight
......
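For context, a `lora key not loaded: ...` message is emitted when the loader cannot match a key in the LoRA file against any module in the base model, typically because the key names use a different prefix or naming scheme. A minimal sketch of that matching logic follows; the LoRA keys are taken from the log above, but the base-model key set and the `diffusion_model.` prefix are hypothetical stand-ins, not the actual ComfyUI internals:

```python
# Sketch of the key-matching check behind "lora key not loaded" messages.
# The model key set below is a hypothetical stand-in; a real loader builds
# it from the diffusion model's state dict.

def report_unmatched_lora_keys(lora_keys, model_keys):
    """Return the LoRA keys with no corresponding module in the model."""
    unmatched = []
    for key in lora_keys:
        # Strip the LoRA-specific suffix to recover the target module name.
        base = (key
                .removesuffix(".lora_down.weight")
                .removesuffix(".lora_up.weight")
                .removesuffix(".alpha"))
        if base not in model_keys:
            unmatched.append(key)
    return unmatched

# Keys copied from the log above:
lora_keys = [
    "blocks.0.cross_attn.q.alpha",
    "blocks.0.cross_attn.q.lora_down.weight",
    "blocks.0.cross_attn.q.lora_up.weight",
]
# Hypothetical: the model's modules carry a "diffusion_model." prefix,
# so the bare "blocks.0...." keys find no match.
model_keys = {"diffusion_model.blocks.0.cross_attn.q"}

for key in report_unmatched_lora_keys(lora_keys, model_keys):
    print("lora key not loaded:", key)
```

Every key in the LoRA file misses, which matches the wall of `lora key not loaded` lines in the log: nothing in the file lines up with the names the native loader expects.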

@jimmymx The currently released LoRA only works with the latest Kijai WanVigenwrapper. If you are using the native ComfyUI workflow, there may be bugs. We are currently working on a solution; until it is released, please try the released workflow with the Kijai WanVigenwrapper.

https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1-forKJ.json
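If the mismatch is only a missing key prefix, one way to experiment while waiting for the fixed weights is to rename the keys in the LoRA state dict before loading. This is a hedged sketch, not the maintainers' fix; the `diffusion_model.` prefix is an assumption about what the native loader expects and should be checked against the actual error messages first:

```python
def remap_lora_keys(state_dict, prefix="diffusion_model."):
    """Prepend an assumed model prefix to every LoRA key.

    The prefix is a guess at what the native loader expects; verify it
    before converting real checkpoint files.
    """
    return {prefix + key: value for key, value in state_dict.items()}

# Toy example with a placeholder value instead of a real tensor:
sd = {"blocks.0.cross_attn.q.lora_down.weight": "tensor-placeholder"}
remapped = remap_lora_keys(sd)
print(list(remapped))
# ['diffusion_model.blocks.0.cross_attn.q.lora_down.weight']
```

In practice you would load and save the state dict with the safetensors library; the dict comprehension itself is format-agnostic.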

@jimmymx

We have updated the weights, and there should be no loading errors now.

We have released template workflows for both T2V and I2V; please refer to the discussion. The workflow file whose name contains "forKJ" is for Kijai's wrapper, and the one whose name contains "NativeComfy" is for native ComfyUI.
