This won't run in Ollama

#1 opened by tebal91901

ollama run hf.co/bartowski/huihui-ai_Huihui-gpt-oss-20b-BF16-abliterated-GGUF:Q6_K
Error: 500 Internal Server Error: unable to load model: /usr/share/ollama/.ollama/models/blobs/sha256-24f679be856db154e4d106de5d25627fe68c96aff2a7072f6bcc9e7b66d0faf8

Yes, the architecture name stored in the model is gpt-oss, but Ollama wants to see gptoss.

So, I just change the filename to gptoss?

Did anyone find any fix?

The string gpt-oss is written directly into the GGUF file. It can't simply be edited to gptoss, because that would shift the offsets of all the internal data.
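
If you want to confirm what's actually stored in the file, here's a minimal sketch that prints the general.architecture metadata key. It assumes the gguf Python package that ships with the llama.cpp repo (pip install gguf); the field-access details can vary between package versions, and the local file path below is just a placeholder.

```python
# Minimal sketch: read the architecture string from a GGUF file's metadata.
# Assumes the `gguf` Python package from llama.cpp (pip install gguf).
from gguf import GGUFReader

def print_architecture(path: str) -> None:
    reader = GGUFReader(path)  # memory-maps the file and parses its metadata
    field = reader.fields.get("general.architecture")
    if field is None:
        print("no general.architecture key found")
        return
    # For string-typed fields, the last part holds the raw UTF-8 bytes of the value.
    arch = bytes(field.parts[-1]).decode("utf-8")
    print(f"{path}: general.architecture = {arch}")  # e.g. 'gpt-oss'

if __name__ == "__main__":
    # Hypothetical local path to the downloaded GGUF blob.
    print_architecture("huihui-gpt-oss-20b-Q6_K.gguf")
```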

I modified Ollama to accept both names, but then another error occurred because llama.cpp doesn't support the new quantization format yet. I have to wait for llama.cpp to release an update.

So how do we even use it?

hopefully Ollama fixes it sometime soon :(

here's the explanation for anyone still wondering:

https://github.com/ollama/ollama/issues/11714#issuecomment-3172893576

Thank you!
