ONNX version of the Llama 3.2 Instruct model, quantized for inference on the Qualcomm Snapdragon 8 Elite NPU.
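
Below is a minimal sketch of how an ONNX model like this one can be opened with ONNX Runtime's QNN execution provider, which targets Qualcomm NPUs such as the one in the Snapdragon 8 Elite. The file name `model.onnx` and the backend library path are assumptions, not part of this repository's documented layout; full text generation would additionally need a tokenizer and KV-cache handling that are not shown here.

```python
# Sketch: open the quantized ONNX model on a Qualcomm NPU via ONNX Runtime.
# File names and paths below are assumptions -- adjust them to the files in
# this repository and the QNN SDK installation on the target device.
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",  # assumed file name; check the repository contents
    providers=[
        (
            "QNNExecutionProvider",
            {
                # Path to the QNN HTP backend library from the Qualcomm
                # AI Engine Direct SDK (libQnnHtp.so on Android,
                # QnnHtp.dll on Windows on Snapdragon).
                "backend_path": "libQnnHtp.so",
            },
        ),
        # CPU fallback for any operators the NPU cannot run.
        "CPUExecutionProvider",
    ],
)

# Inspect the expected inputs (token ids, attention mask, KV-cache tensors, ...).
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)
```

If the QNN backend library cannot be found, ONNX Runtime will fall back to the CPU provider, so checking `session.get_providers()` after loading is a quick way to confirm the NPU is actually being used.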
