Runtime error

Exit code: 1. Reason:

configuration_deepseek.py: 100%|██████████| 10.7k/10.7k [00:00<00:00, 45.6MB/s]
A new version of the following files was downloaded from https://huggingface.co/moonshotai/Kimi-K2-Instruct:
- configuration_deepseek.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
You are using a model of type kimi_k2 to instantiate a model of type deepseek_v3. This is not supported for all configurations of models and can yield errors.
modeling_deepseek.py: 100%|██████████| 75.8k/75.8k [00:00<00:00, 151MB/s]
A new version of the following files was downloaded from https://huggingface.co/moonshotai/Kimi-K2-Instruct:
- modeling_deepseek.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Traceback (most recent call last):
  File "/home/user/app/app.py", line 7, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.float16)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 593, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 315, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4819, in from_pretrained
    hf_quantizer.validate_environment(
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/quantizer_finegrained_fp8.py", line 48, in validate_environment
    raise RuntimeError("No GPU or XPU found. A GPU or XPU is needed for FP8 quantization.")
RuntimeError: No GPU or XPU found. A GPU or XPU is needed for FP8 quantization.
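The traceback shows an FP8-quantized checkpoint being loaded on a CPU-only container: `validate_environment` in the fine-grained FP8 quantizer raises because no GPU or XPU is present, so the real fix is to run on accelerator hardware. A minimal sketch of a guard that fails fast with a clearer message before any weights are downloaded (the helper name and error wording are assumptions, not part of the original app):

```python
def require_accelerator(has_accelerator: bool) -> str:
    """FP8-quantized checkpoints need a GPU or XPU; fail fast with a clear message.

    `has_accelerator` would typically come from torch.cuda.is_available()
    (or torch.xpu.is_available()) in the real app.
    """
    if not has_accelerator:
        raise RuntimeError(
            "This checkpoint is FP8-quantized and needs a GPU or XPU; "
            "switch the container to accelerator hardware before loading the model."
        )
    # Device string to pass along to the model-loading code.
    return "cuda"
```

In `app.py` this check would run before `AutoModelForCausalLM.from_pretrained`, so a misconfigured CPU-only deployment errors out immediately instead of mid-load. The log's warning about pinning a revision can also be addressed by passing the `revision=` argument to `from_pretrained` with a specific commit hash.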
