meta-llama/Llama-3.1-70B (Quantized)
Description
This model is a quantized version of meta-llama/Llama-3.1-70B. The weights were quantized to int8_weight_only using torchao.
Quantization Details
- Quantization Type: int8_weight_only
- Group Size: None
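For reference, a checkpoint like this is typically produced through the torchao integration in transformers (TorchAoConfig). The snippet below is a minimal sketch of that export process, not the exact command used for this repository; the push step and repo id are illustrative.

from transformers import AutoModelForCausalLM, TorchAoConfig

# int8 weight-only quantization; no group size is used for this scheme
quant_config = TorchAoConfig("int8_weight_only")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-70B",
    quantization_config=quant_config,
    device_map="auto",
)
# torchao tensor subclasses are not safetensors-compatible, so disable safe serialization
model.push_to_hub("medmekk/Llama-3.1-70B-torchao-int8_weight_only", safe_serialization=False)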
Usage
You can use this model in your applications by loading it directly from the Hugging Face Hub:
from transformers import AutoModelForCausalLM
# Load the quantized checkpoint (not the full-precision base model) from the Hub
model = AutoModelForCausalLM.from_pretrained("medmekk/Llama-3.1-70B-torchao-int8_weight_only", device_map="auto")
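Once loaded, the model behaves like any other causal language model in transformers. A minimal generation sketch, assuming the tokenizer from the base meta-llama/Llama-3.1-70B repository (the quantized repo may not bundle one):

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-70B")
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))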
Model tree for medmekk/Llama-3.1-70B-torchao-int8_weight_only
- Base model: meta-llama/Llama-3.1-70B