## Model Details
This model is a 4-bit (INT4) version of meta-llama/Llama-3.2-3B-Instruct, generated with intel/auto-round. It is exported in the AutoGPTQ format and is compatible with the AutoGPTQ kernel for inference.
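Because the checkpoint is in the AutoGPTQ format, it can be loaded through 🤗 Transformers once the auto-gptq/optimum backend is installed. The snippet below is a minimal inference sketch under that assumption; the model path and prompt are illustrative placeholders, not part of the original card.

```python
# Minimal inference sketch (assumption: transformers with the auto-gptq/optimum
# backend installed; model path and prompt are illustrative placeholders).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/int4/model"  # placeholder: local path or Hugging Face repo id of this quantized model
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

prompt = "There is a girl who likes adventure,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```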
## Quantize and Evaluate the Model
Here are the commands to reproduce the quantized model:
```bash
# Install auto-round from the branch used for this model
pip install git+https://github.com/intel/auto-round.git@llama/new/9

# Get the quantization example script from the same branch
git clone https://github.com/intel/auto-round.git
cd auto-round
git checkout -b llama/new/9 origin/llama/new/9
cd examples/language-modeling

# Path to the original meta-llama/Llama-3.2-3B-Instruct checkpoint
export model_name=/path/to/model

python3 main.py \
  --model_name $model_name \
  --gradient_accumulate_steps 2 \
  --model_dtype bfloat16 --group_size 128 \
  --train_bs 4 --iters 1000 --nsample 512 \
  --format auto_gptq --disable_quanted_input \
  --output_dir <INT4_MODEL_SAVE_PATH>
```
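The accuracy numbers below were produced with a language-model evaluation harness. As a hedged illustration only (not necessarily the exact setup used for this card), the saved INT4 model can be scored on a few of the listed tasks with EleutherAI's lm-evaluation-harness:

```python
# Illustrative evaluation sketch using EleutherAI's lm-evaluation-harness (lm_eval >= 0.4).
# Assumption: this approximates, but may not exactly match, the setup behind the table below.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=<INT4_MODEL_SAVE_PATH>",  # path written by --output_dir above
    tasks=["lambada_openai", "winogrande", "hellaswag"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```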
### Accuracy Results
| Task Name | BF16 Acc | INT4 Acc |
|---|---|---|
| Avg. | 0.5856 | 0.5768 |
| mmlu | 0.6070 | 0.5904 |
| winogrande | 0.6748 | 0.6685 |
| lambada_openai | 0.6610 | 0.6604 |
| arc_easy | 0.7428 | 0.7378 |
| rte | 0.7256 | 0.7076 |
| boolq | 0.7862 | 0.7985 |
| truthfulqa_mc2 | 0.4993 | 0.4970 |
| piqa | 0.7584 | 0.7508 |
| hellaswag | 0.5222 | 0.5156 |
| arc_challenge | 0.4369 | 0.4258 |
| openbookqa | 0.2820 | 0.2480 |
| truthfulqa_mc1 | 0.3305 | 0.3207 |
## Ethical Considerations and Limitations
The model can produce factually incorrect output and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here is a useful link to learn more about Intel's AI software:
- [Intel Neural Compressor](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
```bibtex
@article{cheng2023optimize,
  title={Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```