---
language:
  - en
  - de
  - fr
  - it
  - pt
  - hi
  - es
  - th
license: llama3.2
library_name: transformers
tags:
  - woq
  - intel-neural-compressor
  - inc
  - neural-compressor
  - intel
  - teq
  - meta
  - pytorch
  - llama
  - llama-3
model_name: Llama 3.2 1B
base_model: meta-llama/Llama-3.2-1B
inference: false
model_creator: meta-llama
pipeline_tag: text-generation
prompt_template: '{prompt} '
quantized_by: fbaldassarri
---

## Model Information

Quantized version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) using torch.float32 for quantization tuning.

- 4 bits (INT4)
- group size = 128
- Symmetrical Quantization
- Algorithm: TEQ (Trainable Equivalent Transformation for Quantization of LLMs)

Quantization framework: Intel Neural Compressor version 3.3.1
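The exact tuning script is referenced under Replication Recipe below. As a rough illustration, the sketch below shows how the settings above could be expressed with Intel Neural Compressor's PyTorch weight-only quantization API. The `TEQConfig`/`prepare`/`convert` names follow the INC 3.x API, while the calibration function and example inputs are hypothetical placeholders; treat this as a sketch of the technique, not the recipe actually used for this model.

```python
# Sketch only: TEQ INT4 weight-only quantization with Intel Neural Compressor 3.x.
# API names follow neural_compressor.torch.quantization; exact signatures may
# differ between INC versions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from neural_compressor.torch.quantization import TEQConfig, convert, prepare

model_name = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)

# Settings from this card: 4 bits, group size 128, symmetric scheme.
quant_config = TEQConfig(bits=4, group_size=128, use_sym=True)

# TEQ inserts trainable per-channel scales, so prepare() takes example inputs
# and the prepared model is calibrated briefly before conversion.
example_inputs = tokenizer("Hello, world!", return_tensors="pt").input_ids

def calib_fn(m):
    # Hypothetical placeholder: a real TEQ run optimizes the inserted scales
    # over calibration text for a number of steps before converting.
    with torch.no_grad():
        m(example_inputs)

model = prepare(model, quant_config, example_inputs=example_inputs)
calib_fn(model)
model = convert(model)  # fold the learned scales and pack the INT4 weights
```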

Note: this INT4 version of Llama-3.2-1B has been quantized to run inference on CPU.

## Disclaimer

This quantized model comes with no warranty. It has been developed experimentally, for research purposes only.

This repository contains only two files: quantized_weight.pt (the quantized weights) and qconfig.json (the quantization configuration). Together they form the quantized model, and they must be used in combination with the base model meta-llama/Llama-3.2-1B.
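For illustration, here is a minimal way to inspect the two artifacts with plain PyTorch and JSON tooling (the internal layout of the checkpoint depends on the Neural Compressor version, so the contents printed below are not guaranteed):

```python
import json
import torch

# The weight container saved by Intel Neural Compressor; its internal layout
# (plain state dict vs. wrapped module) depends on the INC version used.
state = torch.load("quantized_weight.pt", map_location="cpu")

# qconfig.json records how each layer was quantized (bits, group size, scheme).
with open("qconfig.json") as f:
    qconfig = json.load(f)

print(type(state))
print(list(qconfig)[:5])  # top-level keys of the quantization config
```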

## Replication Recipe

```bash
$ conda create --name neural-compressor-3.3.1 --file requirements_conda_neural-compressor-3.3.1
$ conda activate neural-compressor-3.3.1
$ python meta-llama_Llama-3.2-1B-TEQ-int4-gs128-sym.py
```

## Run Inference

To run inference, you can use fbaldassarri/woq-inference.

```bash
python teq_inference.py --base meta-llama/Llama-3.2-1B --model_dir ./meta-llama_Llama-3.2-1B-TEQ-int4-gs128-sym --weights_file quantized_weight.pt --config_file qconfig.json --prompt "What if you had superpowers?" --device cpu
```
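The INT4 repacking logic lives in woq-inference and is not reproduced here. As a rough outline, such a script is built around an ordinary transformers generation loop like the following (model name and prompt taken from this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float32)

# At this point woq-inference rebuilds the INT4 layers from
# quantized_weight.pt and qconfig.json (INC-specific, omitted here).

inputs = tokenizer("What if you had superpowers?", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```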

Note: you should probably train this model on a downstream task to be able to use it for predictions and inference.

## License

Llama 3.2 Community License