[SOLVED] Repository is not GGUF or is not compatible with llama.cpp

#3 opened by abyssalaxioms

Posted in the wrong version's thread, but keeping it here for posterity.

You may encounter this error when pointing Ollama at this repository:

Error: pull model manifest: 400: Repository is not GGUF or is not compatible with llama.cpp

For anyone else running into the same error when trying to use this model: the fix I found on Reddit is covered in my response in the other thread, which explains how I got this model working locally with vLLM instead:
https://huggingface.co/unsloth/QwQ-32B-unsloth-bnb-4bit/discussions/4
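For context, Ollama raises this 400 error when the Hugging Face repository it is pointed at contains no GGUF files: this repo ships bitsandbytes 4-bit safetensors, which llama.cpp (and therefore Ollama) cannot load. A minimal sketch of the difference, assuming a separate GGUF build of the model exists (the GGUF repo name and quant tag below are assumptions, not taken from this thread):

```shell
# Fails: the bnb-4bit repo has no GGUF files, so Ollama returns
# "400: Repository is not GGUF or is not compatible with llama.cpp"
ollama run hf.co/unsloth/QwQ-32B-unsloth-bnb-4bit

# Typically works: point Ollama at a GGUF repository instead
# (repo name and Q4_K_M tag are assumed for illustration)
ollama run hf.co/unsloth/QwQ-32B-GGUF:Q4_K_M
```

Alternatively, as described in the linked thread, non-GGUF quantizations like bnb-4bit can be served with vLLM rather than llama.cpp-based tools.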
