This is a static quant of mistralai/Voxtral-Small-24B-2507, made with llama.cpp commit 00fa15fedc79263fa0285e6a3bbb0cfb3e3878a2.
Due to limited storage space on my computer, the q4_K_M quant was requantized from the q8_0 rather than from the full-precision model, which may not be ideal for quality.
(The audio mmproj also works with other existing finetunes based on Mistral Small 3.2. I tried it, and it worked surprisingly well.)
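For reference, here is a sketch of how such a quant can be run together with the audio mmproj using llama.cpp's multimodal CLI. The GGUF file names below are placeholders, not the exact names in this repo, and the flags assume a llama.cpp build recent enough to include audio support (the commit mentioned above or newer):

```shell
# Run the quantized model plus the audio mmproj with llama-mtmd-cli.
# Substitute the actual GGUF file names from the repo and your own audio file.
./llama-mtmd-cli \
  -m Voxtral-Small-24B-2507-q4_K_M.gguf \
  --mmproj mmproj-Voxtral-Small-24B-2507.gguf \
  --audio input.mp3 \
  -p "Transcribe the audio."
```

The same `--mmproj` argument is what allows pairing the audio projector with other Mistral Small 3.2-based finetunes, as noted above.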
Model tree for stduhpf/Voxtral-Small-24B-2507-GGUF
- Base model: mistralai/Mistral-Small-24B-Base-2501
- Finetuned from: mistralai/Voxtral-Small-24B-2507