Official AQLM quantization of meta-llama/Llama-3.2-3B-Instruct, fine-tuned with PV-Tuning.

For this quantization, we used 2 codebooks of 8 bits each and a group size of 8.
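
A minimal loading sketch with 🤗 Transformers; it assumes the AQLM inference kernels are installed (e.g. `pip install aqlm[gpu]`) and a `transformers` version recent enough to support AQLM:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Llama-3.2-3B-Instruct-AQLM-PV-2Bit-2x8"

# AQLM weights stay compressed in memory and are dequantized on the fly
# by the aqlm kernels, so the model loads at roughly its 1.5 GB on-disk size.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

prompt = "Explain additive quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```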

Results:

| Model | Quantization | MMLU (5-shot) | ArcC | ArcE | HellaSwag | PiQA | Winogrande | Model size, GB |
|---|---|---|---|---|---|---|---|---|
| meta-llama/Llama-3.2-3B-Instruct | fp16 | 0.5984 | 0.4369 | 0.7428 | 0.5224 | 0.7579 | 0.6732 | 6.4 |
| | 2x8g8 | 0.4842 | 0.3686 | 0.7066 | 0.4833 | 0.7274 | 0.6346 | 1.5 |
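
Scores like those above can be reproduced with an evaluation harness such as EleutherAI's lm-evaluation-harness; the sketch below is an assumption about the setup, not the official evaluation script (only MMLU's 5-shot setting is stated in the table, and the shot counts for the other tasks are not given here):

```python
import lm_eval

# Hypothetical reproduction of the MMLU row; task names follow
# lm-evaluation-harness conventions.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ISTA-DASLab/Llama-3.2-3B-Instruct-AQLM-PV-2Bit-2x8,dtype=auto",
    tasks=["mmlu"],
    num_fewshot=5,
)
print(results["results"])
```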