Official AQLM quantization of meta-llama/Llama-3.2-1B, fine-tuned with PV-Tuning.

For this quantization, we used 2 codebooks of 8 bits each and a group size of 8. That works out to (2 × 8) bits per group of 8 weights, i.e. 2 bits per weight, matching the "2Bit" in the model name.

Results:

| Model | Quantization | MMLU (5-shot) | ArcC | ArcE | HellaSwag | PiQA | Winogrande | Model size, GB |
|---|---|---|---|---|---|---|---|---|
| meta-llama/Llama-3.2-1B | fp16 | 0.3195 | 0.3123 | 0.6553 | 0.4772 | 0.7448 | 0.6054 | 2.5 |
| | 2x8g8 | 0.2465 | 0.2713 | 0.5896 | 0.4034 | 0.7067 | 0.5564 | 0.8 |
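
A minimal loading sketch follows, assuming the `aqlm` inference kernels and a recent `transformers` are installed (e.g. `pip install aqlm[gpu] transformers`; the exact install extras and prompt are assumptions). AQLM checkpoints load through the standard `AutoModelForCausalLM` API:

```python
# Minimal sketch: load the 2-bit AQLM checkpoint and generate.
# Assumes `pip install aqlm[gpu] transformers` (exact versions are an assumption).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Llama-3.2-1B-AQLM-PV-2Bit-2x8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config shipped with the checkpoint routes the
# linear layers through the AQLM kernels automatically.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

prompt = "The capital of Austria is"  # hypothetical example prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```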