llama-server issue
#5
by Zer0-wastaken - opened
I tried the SmolVLM realtime-webcam project (https://github.com/ngxson/smolvlm-realtime-webcam) with `llama-server -hf openbmb/MiniCPM-V-4_5-gguf`, but it didn't work. Loading a local copy of the model failed too, while other models run out of the box. I suspect a problem with llama.cpp.
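For reference, this is roughly how I launched the server (a sketch, assuming a recent llama.cpp build; the webcam demo expects an OpenAI-compatible endpoint, which defaults to port 8080):

```sh
# Download the GGUF from the Hugging Face repo and serve it locally.
# --port 8080 matches the default base URL the webcam demo points at.
llama-server -hf openbmb/MiniCPM-V-4_5-gguf --port 8080
```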
@Zer0-wastaken I haven't used this framework; the model may not be supported in it yet.
It's llama.cpp.
@Zer0-wastaken OK, please check whether your llama.cpp build is up to date with the latest code. Could you also share some logs so that I can analyze the problem more accurately?
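A rough sketch of how to check the build and capture logs (assuming you built llama.cpp from source; recent builds print a build number with `--version`):

```sh
# Inside your llama.cpp checkout: print the build the binary came from.
llama-server --version

# If it is behind, pull the latest code and rebuild.
git pull
cmake -B build && cmake --build build --config Release

# Re-run the server and capture the full output to a log file to share.
llama-server -hf openbmb/MiniCPM-V-4_5-gguf 2>&1 | tee server.log
```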