Ollama/llama.cpp support

#10 opened by dpreti (Almawave org)

We have observed that the model is currently not served correctly by Ollama or llama.cpp.

We are currently investigating the reasons behind this unexpected behavior.
In the meantime, we strongly suggest serving the model with vLLM or the Transformers library, as shown in the model card.
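As a minimal sketch of the recommended workaround, the model can be served with vLLM's OpenAI-compatible server; the model id below is a placeholder, so substitute the actual id from the model card:

```shell
# Start an OpenAI-compatible server with vLLM (placeholder model id).
vllm serve <model-id> --port 8000

# Query it with a standard chat completion request once it is up.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "<model-id>", "messages": [{"role": "user", "content": "Hello"}]}'
```

This avoids Ollama and llama.cpp entirely until the serving issue is resolved.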
