Original model: https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501
Q6_K_C: Q6_K quantized weights; the output and embedding tensors are copied rather than quantized.

Fits a 24K-token context on a 24 GiB GPU.
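
As one way to use the 24K-context setting, here is a minimal llama-cpp-python sketch. The GGUF filename below is illustrative (substitute the actual file from this repo), and llama-cpp-python must be built with GPU offload support for the 24 GiB figure to apply.

```python
# Minimal sketch: load the Q6_K_C GGUF with a 24K context and run one chat turn.
# The model_path is a placeholder; point it at the downloaded GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Small-24B-Instruct-2501-Q6_K_C.gguf",  # hypothetical filename
    n_ctx=24576,       # 24K-token context, per the note above
    n_gpu_layers=-1,   # offload all layers to the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the GGUF file format in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```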