Official AQLM quantization of meta-llama/Meta-Llama-3.1-8B finetuned with PV-Tuning.

For this quantization, we used one codebook of 16 bits and a group size of 16.
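This scheme admits a quick back-of-envelope check: each group of 16 weights is replaced by one 16-bit codebook index, i.e. about 1 bit per weight, while the embedding and output layers stay in FP16. The sketch below illustrates the arithmetic; the layer shapes and the ~7B linear-parameter count are assumptions about the standard Llama 3.1 8B architecture, not figures from this card:

```python
# Back-of-envelope size estimate for a 1x16g16 AQLM scheme.
# Shapes below are assumed for Llama 3.1 8B; treat all numbers as approximate.
vocab, hidden = 128_256, 4_096

bits_per_group = 16          # one 16-bit codebook index per group
group_size = 16              # weights encoded by each index
bits_per_weight = bits_per_group / group_size   # -> 1.0

linear_params = 7.0e9        # rough count of quantized transformer linear weights (assumption)
embed_params = 2 * vocab * hidden  # input embedding + untied LM head, kept in FP16

quantized_bytes = linear_params * bits_per_weight / 8
fp16_bytes = embed_params * 2      # 2 bytes per FP16 value
total_gb = (quantized_bytes + fp16_bytes) / 1e9
print(f"~{bits_per_weight:.1f} bit/weight, ~{total_gb:.1f} GB before codebooks and scales")
```

Codebooks and per-group scales add further overhead, which is consistent with the ~3.4 GB checkpoint reported below.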

Results:

| Model | Quantization | MMLU (5-shot) | ArcC | ArcE | Hellaswag | PiQA | Winogrande | Model size, GB |
|---|---|---|---|---|---|---|---|---|
| meta-llama/Meta-Llama-3.1-8B | None | 0.6521 | 0.5145 | 0.8144 | 0.5998 | 0.8014 | 0.7356 | 16.1 |
| | 1x16g16 | 0.3574 | 0.3464 | 0.6793 | 0.4822 | 0.7318 | 0.6275 | 3.4 |
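A minimal loading sketch with Hugging Face `transformers` is shown below. It assumes the AQLM inference kernels are installed (`pip install aqlm[gpu]`) and a CUDA GPU is available; it has not been verified against this exact checkpoint:

```python
# Hedged sketch: loading this AQLM checkpoint with Hugging Face transformers.
REPO_ID = "ISTA-DASLab/Meta-Llama-3.1-8B-AQLM-PV-1Bit-1x16-hf"

def load_quantized(repo_id: str = REPO_ID):
    # Imports are deferred so the snippet documents the dependency
    # without requiring `transformers` just to read it.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype="auto",   # keep the stored FP16 codebooks / I16 codes
        device_map="auto",    # place layers on the available GPU(s)
    )
    return model, tokenizer

# Typical use (downloads ~3.4 GB and requires a GPU):
#   model, tokenizer = load_quantized()
#   ids = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
#   print(tokenizer.decode(model.generate(**ids, max_new_tokens=16)[0]))
```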

**Note:** We used `lm-eval==0.4.0` for evaluation.

**UPD (09.08.2024):** Uploaded a new version, finetuned on more data for longer, with better quality.

