---
base_model:
- meta-llama/Llama-3.2-1B
---
# meta-llama/Llama-3.2-1B (Quantized)
## Description
This model is a quantized version of the original model `meta-llama/Llama-3.2-1B`. It was quantized using TorchAo.
## Quantization Details
- **Quantization Parameters**: `TorchAoConfig("int8_weight_only")`
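For reference, a checkpoint like this one can be produced with the TorchAo integration in `transformers`, using the exact config shown above. A minimal sketch (the base model id comes from this card; the output directory name is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig

# Quantize the base model's weights to int8 on load (weight-only quantization).
quantization_config = TorchAoConfig("int8_weight_only")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    torch_dtype="auto",
    device_map="auto",
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")

# Safetensors serialization is not supported for torchao-quantized weights,
# so the checkpoint is saved with plain torch serialization.
model.save_pretrained("Llama-3.2-1B-TORCHAO-W8", safe_serialization=False)
tokenizer.save_pretrained("Llama-3.2-1B-TORCHAO-W8")
```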
## Usage
You can use this model in your applications by loading it directly from the Hugging Face Hub.
To run inference with `Llama-3.2-1B-TORCHAO-W8`, first install `torch` and `torchao`:
```bash
pip install torch torchao --upgrade
```
Then install the latest version of `transformers` (with `accelerate` for automatic device placement):
```bash
pip install transformers[accelerate] --upgrade
```
The model can then be instantiated like any other causal language model via `AutoModelForCausalLM` and used for inference as usual.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# device_map="auto" places the model on the available accelerator (requires accelerate).
model = AutoModelForCausalLM.from_pretrained("Llama-3.2-1B-TORCHAO-W8", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Llama-3.2-1B-TORCHAO-W8")
```