Please Explain

#1
by PlayAI - opened

Hi there—could you walk me through how you converted the Mistral-Small-3.2-24B-Instruct-2506 model to GGUF format and got it working in Ollama? I’ve tried other users’ quantized versions myself, but none of them seem to run in Ollama—what are you doing differently?


Because we manually fixed the chat templates and tool calling, which took many hours of work! :)
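For anyone following along, here is a rough sketch of the usual conversion path using llama.cpp's converter plus an Ollama Modelfile. The paths, quant type, and TEMPLATE body below are placeholders, not PlayAI's actual fixes—a correct TEMPLATE (and tool-calling markup) is typically what makes or breaks these quants in Ollama:

```shell
# Convert the HF checkpoint to GGUF with llama.cpp's converter
# (output path and dtype are illustrative, not PlayAI's exact settings)
python convert_hf_to_gguf.py ./Mistral-Small-3.2-24B-Instruct-2506 \
    --outfile mistral-small-3.2.gguf --outtype bf16

# Optionally quantize to a smaller format
./llama-quantize mistral-small-3.2.gguf mistral-small-3.2-Q4_K_M.gguf Q4_K_M

# Minimal Ollama Modelfile — the TEMPLATE block here is a placeholder;
# a broken or missing chat template is the usual reason third-party
# GGUFs fail to run properly in Ollama
cat > Modelfile <<'EOF'
FROM ./mistral-small-3.2-Q4_K_M.gguf
TEMPLATE """{{ .Prompt }}"""
EOF

ollama create mistral-small-3.2 -f Modelfile
```

Running this end to end requires the full checkpoint on disk; the point is that the Modelfile's TEMPLATE must match the model's expected chat format, which is the part the maintainers say they fixed by hand.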
