runtime error

Exit code: 1. Reason:

hyperflux_00001_.q8_0.gguf: 100%|██████████| 13.0G/13.0G [00:10<00:00, 1.20GB/s]
config.json: 100%|██████████| 378/378 [00:00<00:00, 3.02MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 125, in <module>
    transformer = FluxTransformer2DModel.from_single_file(
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/single_file_model.py", line 399, in from_single_file
    load_model_dict_into_meta(
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/model_loading_utils.py", line 288, in load_model_dict_into_meta
    raise ValueError(
ValueError: Cannot load because transformer_blocks.0.norm1.linear.weight expected shape torch.Size([18432, 3072]), but got torch.Size([18432, 3264]). If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.
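A likely reading of the shape mismatch (an interpretation, not confirmed by the log): the checkpoint is a Q8_0 GGUF file, and the loader appears to be reading the raw packed buffer instead of dequantizing it. Q8_0 stores weights in blocks of 32, each block holding 32 int8 values plus one fp16 scale (34 bytes per block), so a logical row of 3072 weights occupies exactly 3264 bytes — the "got" shape in the traceback. A minimal sketch of that arithmetic:

```python
# Q8_0 packs weights in blocks of 32: each block is 32 int8 values
# plus a 2-byte fp16 scale, i.e. 34 bytes per 32 logical weights.
Q8_0_BLOCK_SIZE = 32   # logical weights per block
Q8_0_BLOCK_BYTES = 34  # 32 x int8 + 2-byte fp16 scale

def q8_0_packed_dim(n: int) -> int:
    """Bytes occupied by a row of n weights quantized as Q8_0."""
    assert n % Q8_0_BLOCK_SIZE == 0, "Q8_0 rows must be a multiple of 32"
    return n // Q8_0_BLOCK_SIZE * Q8_0_BLOCK_BYTES

# The mismatch in the traceback: expected 3072, got 3264.
print(q8_0_packed_dim(3072))  # -> 3264
```

If that reading is right, passing `ignore_mismatched_sizes=True` as the error message suggests would only replace the layer with randomly initialized weights. The GGUF file instead needs to be dequantized at load time; recent diffusers releases expose this via `GGUFQuantizationConfig`, roughly `FluxTransformer2DModel.from_single_file(..., quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16))` — but verify that the installed diffusers version actually ships GGUF support before relying on this.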
