to-do: fix diffusers version mismatch

#3 · opened by LPX55 (Owner)
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 256, in run_task
    res = task(*args, **kwargs) # pyright: ignore [reportCallIssue]
  File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/app/app_v4.py", line 180, in generate_image
    image = pipe(
  File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux_controlnet.py", line 1082, in __call__
    controlnet_block_samples, controlnet_single_block_samples = self.controlnet(
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/controlnets/controlnet_flux.py", line 338, in forward
    encoder_hidden_states, hidden_states = block(
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/transformers/transformer_flux.py", line 453, in forward
    attention_outputs = self.attn(
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/transformers/transformer_flux.py", line 352, in forward
    return self.processor(self, hidden_states, encoder_hidden_states, attention_mask, image_rotary_emb, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/transformers/transformer_flux.py", line 118, in __call__
    hidden_states = dispatch_attention_fn(
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/attention_dispatch.py", line 326, in dispatch_attention_fn
    return backend_fn(**kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/attention_dispatch.py", line 1482, in _native_attention
    out = torch.nn.functional.scaled_dot_product_attention(
TypeError: scaled_dot_product_attention() got an unexpected keyword argument 'enable_gqa'
```

Error: 'TypeError'

This looks like the version mismatch from the title: the installed diffusers passes `enable_gqa` through its attention dispatch, but that keyword was only added to `torch.nn.functional.scaled_dot_product_attention` in PyTorch 2.5, so the older torch in this Space rejects it. Will look into it in a few days.
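As a stopgap until the pins are sorted out, one could gate the newer code path on the installed torch version. This is a minimal sketch, not the Space's actual fix; the helper name is hypothetical, and it assumes (per the PyTorch 2.5 release) that `enable_gqa` first appeared in the 2.5 SDPA signature:

```python
def sdpa_supports_enable_gqa(torch_version: str) -> bool:
    """Return True if this torch version's scaled_dot_product_attention
    accepts the `enable_gqa` keyword (added in PyTorch 2.5).

    Takes the version string (e.g. torch.__version__) rather than
    importing torch, so it is easy to test in isolation.
    """
    # Strip any local build suffix, e.g. "2.4.1+cu121" -> "2.4.1".
    base = torch_version.split("+")[0]
    major, minor = (int(part) for part in base.split(".")[:2])
    return (major, minor) >= (2, 5)
```

At runtime one would call it as `sdpa_supports_enable_gqa(torch.__version__)` and either drop the `enable_gqa` kwarg or pin torch >= 2.5 when it returns False.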
