llama.cpp support when?
#7 opened by alanzhuly
Great model and use case. I want to try it locally on my Mac!
Thank you for your interest in Phi-4-multimodal!
Our Hugging Face friend has just shared that the model can run with llama.cpp.
All you need is:
brew install llama.cpp
Followed by:
llama-cli -hf bartowski/microsoft_Phi-4-mini-instruct-GGUF:Q8_0
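If you want a quick sanity check after installing, here is a minimal one-shot run (the prompt text is just an example; -p sets the prompt and -n caps the number of generated tokens, both standard llama-cli flags):

llama-cli -hf bartowski/microsoft_Phi-4-mini-instruct-GGUF:Q8_0 -p "Explain llama.cpp in one sentence." -n 128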
That's a different model, though. Phi-4-mini-instruct is text-only, not multimodal.