---
base_model: mistralai/Mistral-Small-3.2-24B-Instruct-2506
base_model_relation: quantized
quantized_by: ArtusDev
language:
  - en
  - fr
  - de
  - es
  - pt
  - it
  - ja
  - ko
  - ru
  - zh
  - ar
  - fa
  - id
  - ms
  - ne
  - pl
  - ro
  - sr
  - sv
  - tr
  - uk
  - vi
  - hi
  - bn
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
  - exl3
---

# EXL3 Quants of mistralai/Mistral-Small-3.2-24B-Instruct-2506

EXL3 quants of mistralai/Mistral-Small-3.2-24B-Instruct-2506, quantized with exllamav3.

Based on the HF conversion of the base 3.2 model by unsloth: unsloth/Mistral-Small-3.2-24B-Instruct-2506

## Quants

| Quant (Revision) | Bits per Weight | Head Bits |
| ---------------- | --------------- | --------- |
| 2.0_H6           | 2.0             | 6         |
| 2.5_H6           | 2.5             | 6         |
| 3.0_H6           | 3.0             | 6         |
| 3.5_H6           | 3.5             | 6         |
| 4.0_H6           | 4.0             | 6         |
| 4.5_H6           | 4.5             | 6         |
| 5.0_H6           | 5.0             | 6         |
| 5.5_H8           | 5.5             | 8         |
| 6.0_H6           | 6.0             | 6         |
| 8.0_H6           | 8.0             | 6         |
| 8.0_H8           | 8.0             | 8         |

## Downloading quants with huggingface-cli

<details>
  <summary>Click to view download instructions</summary>

Install huggingface-cli:

```shell
pip install -U "huggingface_hub[cli]"
```

Download a quant by targeting the specific quant revision (branch):

```shell
huggingface-cli download ArtusDev/mistralai_Mistral-Small-3.2-24B-Instruct-2506-EXL3 --revision "5.0bpw_H6" --local-dir ./
```

</details>
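
The same revision can also be fetched from Python with `huggingface_hub.snapshot_download`. This is a minimal sketch, not part of the original instructions: the repo id and revision mirror the CLI example above, and the `local_dir` path is an arbitrary example.

```python
# Minimal sketch: download one EXL3 quant revision via the huggingface_hub
# Python API instead of the CLI. repo_id and revision match the CLI example
# above; local_dir is an example target directory, not a required path.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="ArtusDev/mistralai_Mistral-Small-3.2-24B-Instruct-2506-EXL3",
    revision="5.0bpw_H6",  # quant branch, as in the CLI example
    local_dir="./Mistral-Small-3.2-24B-Instruct-2506-EXL3-5.0bpw",
)
```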