Runtime error
Exit code: 1. Reason:

model-00003-of-00004.safetensors: 100%|██████████| 4.99G/4.99G [00:16<00:00, 299MB/s]
model-00004-of-00004.safetensors: 100%|██████████| 1.26G/1.26G [00:04<00:00, 272MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 80, in <module>
    tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, torch_dtype="bfloat16", device_map=device_map)
  File "/usr/local/lib/python3.10/site-packages/llava/model/builder.py", line 228, in load_pretrained_model
    model = LlavaQwenForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True, attn_implementation=attn_implementation, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 309, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4499, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2183, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2334, in _check_and_enable_flash_attn_2
    raise ValueError(
ValueError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available on CPU. Please make sure torch can access a CUDA device.
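The model downloads complete; the failure comes from the last traceback frame. The container is running on CPU-only hardware, but the LLaVA builder asks transformers for attn_implementation="flash_attention_2", which hard-requires a CUDA device. Below is a minimal sketch of a guard that falls back to PyTorch's built-in "sdpa" attention when no GPU is visible. Here pretrained and model_name are placeholders for the values app.py actually uses, and the sketch assumes load_pretrained_model accepts an attn_implementation keyword, as the frame at builder.py:228 above suggests (it forwards one to from_pretrained):

    import torch
    from llava.model.builder import load_pretrained_model

    pretrained = "<repo-or-path-of-the-model>"   # placeholder: use the value from app.py
    model_name = "<model-name>"                  # placeholder: use the value from app.py

    # Flash Attention 2 needs a CUDA device; fall back to PyTorch's
    # scaled-dot-product attention ("sdpa") when only a CPU is available.
    has_cuda = torch.cuda.is_available()
    attn_impl = "flash_attention_2" if has_cuda else "sdpa"
    device_map = "auto" if has_cuda else "cpu"

    tokenizer, model, image_processor, max_length = load_pretrained_model(
        pretrained,
        None,
        model_name,
        torch_dtype="bfloat16",
        device_map=device_map,
        # Assumption: the builder forwards this keyword to from_pretrained,
        # as the traceback above indicates.
        attn_implementation=attn_impl,
    )

Alternatively, leave the code unchanged and assign GPU hardware to the Space so that torch can see a CUDA device, as the error message itself suggests.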