Install on Linux failed

#2
by drwootton - opened

The readme states I simply need to create a Python virtual environment and then run pip install hdi1 --no-build-isolation.
My first attempt failed complaining it could not find 'setuptools', so I installed that.
The second attempt failed because it could not find torch, so I installed that.
On the third attempt, the hdi1 build ran for a while and then failed with a long error trace:
/opt/nvidia/hpc_sdk/Linux_x86_64/25.1/compilers/include/cmath:57: warning: ignoring β€˜#pragma libm ’ [-Wunknown-pragmas]
57 | #pragma libm (ynf,y1f,y0f)
/opt/nvidia/hpc_sdk/Linux_x86_64/25.1/compilers/include/cmath:58: warning: ignoring β€˜#pragma libm ’ [-Wunknown-pragmas]
58 | #pragma libm (yn,y1,y0)
/opt/nvidia/hpc_sdk/Linux_x86_64/25.1/compilers/include/cmath:59: warning: ignoring β€˜#pragma libm ’ [-Wunknown-pragmas]
59 | #pragma libm (jnf,j1f,j0f)
/opt/nvidia/hpc_sdk/Linux_x86_64/25.1/compilers/include/cmath:60: warning: ignoring β€˜#pragma libm ’ [-Wunknown-pragmas]
60 | #pragma libm (jn,j1,j0)
In file included from /nvme/dave/AI/HiDream/venv/lib/python3.12/site-packages/torch/include/ATen/cuda/CUDAContextLight.h:6,
from /nvme/dave/AI/HiDream/venv/lib/python3.12/site-packages/torch/include/ATen/cuda/CUDAContext.h:3,
from gptqmodel_ext/marlin/marlin_cuda.cpp:10:
/nvme/dave/AI/HiDream/venv/lib/python3.12/site-packages/nvidia/cuda_runtime/include/cuda_runtime_api.h:148:10: fatal error: crt/host_defines.h: No such file or directory
148 | #include "crt/host_defines.h"
| ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command '/usr/bin/x86_64-linux-gnu-g++' failed with exit code 1
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for gptqmodel
Building wheel for logbar (pyproject.toml) ... done
Created wheel for logbar: filename=logbar-0.0.4-py3-none-any.whl size=12714 sha256=744acd8a876892b350aa5b90242e15a0d0cd175adc8e1e0a291ffb9a1e9942fb
Stored in directory: /home/dave/.cache/pip/wheels/f9/f2/88/c815d0365e734e5f55bbf798723463bf17472ab984b92e4ee0
Building wheel for tokenicer (pyproject.toml) ... done
Created wheel for tokenicer: filename=tokenicer-0.0.4-py3-none-any.whl size=11443 sha256=530b2b8c8b9c3c07d82fc0b02b690f2acb4becf319ac3030f1f4586120deedc8
Stored in directory: /home/dave/.cache/pip/wheels/ee/7b/2d/693b10c9fc0cce9476ba24f4876d23892940cba2f72e092fb5
Successfully built device-smi logbar tokenicer
Failed to build gptqmodel
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (gptqmodel)

This is an Ubuntu 24.10 Linux system

host_defines.h actually exists at /usr/local/cuda-12.6/targets/x86_64-linux/include/crt/host_defines.h
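A quick diagnostic (a sketch, using the two include trees from this thread, not a verified analysis) is to check which candidate CUDA include directories actually contain crt/host_defines.h. The error suggests the compiler resolved cuda_runtime_api.h from the pip nvidia-cuda-runtime package in the venv, whose include tree may not ship the crt/ headers, rather than from the full system toolkit:

```shell
# Diagnostic sketch: report whether each candidate CUDA include tree
# contains crt/host_defines.h. The first path is the system CUDA 12.6
# toolkit; the second is the pip nvidia-cuda-runtime package from the
# error log above.
for inc in \
    /usr/local/cuda-12.6/targets/x86_64-linux/include \
    /nvme/dave/AI/HiDream/venv/lib/python3.12/site-packages/nvidia/cuda_runtime/include
do
    if [ -f "$inc/crt/host_defines.h" ]; then
        echo "present: $inc"
    else
        echo "missing: $inc"
    fi
done
```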

I do get a warning that CUDA 12.6 is a minor version mismatch with torch, which was built against CUDA 12.4, but the warning says this should not be a problem, and I have been running ComfyUI and LLM models with this setup for a while.
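One workaround worth trying (an assumption based on the header location above, not a verified fix) is to point the gptqmodel source build at the full CUDA 12.6 toolkit before re-running the install, so the compiler finds the complete include tree instead of the stripped-down pip runtime headers:

```shell
# Workaround sketch (assumption): make the build use the full CUDA 12.6
# toolkit headers, which do contain crt/host_defines.h, rather than the
# pip nvidia-cuda-runtime package headers picked up from the venv.
export CUDA_HOME=/usr/local/cuda-12.6
export PATH="$CUDA_HOME/bin:$PATH"
export CPATH="$CUDA_HOME/targets/x86_64-linux/include${CPATH:+:$CPATH}"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```

Then re-run pip install hdi1 --no-build-isolation in the same shell so the build subprocess inherits these variables.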

Owner

It seems like installation failed for the gptqmodel quantization library. Please check the installation guide or open an issue on their repo: https://github.com/ModelCloud/GPTQModel
