---
license: apache-2.0
base_model:
- mistralai/Mistral-Small-24B-Base-2501
---
# Mistral-Small-24B-Base-2501-GGUF
This repo provides two GGUF quantizations of [mistralai/Mistral-Small-24B-Base-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501):
| Filename | File size | Description | TLDR |
| -------------------------------------------- | --------- | ------------------------------------------------------------------------ | ---------------------------------------- |
| Mistral-Small-24B-Base-2501-q8_0-q4_K_S.gguf | 14.05GB | q4_K_S quantization using q8_0 for token embeddings and output tensors | Good quality, smaller size |
| Mistral-Small-24B-Base-2501-q8_0-q6_K.gguf | 19.67GB | q6_K quantization using q8_0 for token embeddings and output tensors | Practically perfect quality, larger size |
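
To run either file locally, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python`; the repo namespace below is a placeholder, and any llama.cpp-compatible runtime works the same way. Since this is a base model, use plain text completion rather than a chat template.

```python
# Minimal sketch: download one of the quantizations and run a text completion.
# "your-username" is a placeholder -- replace with the actual namespace of this repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="your-username/Mistral-Small-24B-Base-2501-GGUF",  # assumed repo id
    filename="Mistral-Small-24B-Base-2501-q8_0-q4_K_S.gguf",   # the smaller quant
)

# Base model: no chat template, just continue the prompt.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])
```

The larger q8_0-q6_K file can be used the same way by swapping the `filename` argument; it needs roughly 20GB of memory for the weights alone, plus context.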