
phi-3-onnx

phi-3-onnx is an ONNX int4 quantized version of Microsoft's Phi-3-mini-4k-instruct, providing a fast, small-footprint inference implementation optimized for AI PCs with Intel GPU, CPU, and NPU.
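
One way to run the model locally is Microsoft's onnxruntime-genai runtime. The sketch below is a minimal example, not an official recipe: the local path and prompt are placeholders, and the generator calls match recent releases of the library (its API has shifted between versions), so check the documentation for your installed version.

```python
# Minimal sketch, assuming onnxruntime-genai is installed
# (pip install onnxruntime-genai) and the model files have been
# downloaded to ./phi-3-onnx (path is an assumption).
import onnxruntime_genai as og

model = og.Model("./phi-3-onnx")
tokenizer = og.Tokenizer(model)

# Phi-3 chat template, per the parent model's documentation.
prompt = "<|user|>\nWhat is int4 quantization?<|end|>\n<|assistant|>"

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode(prompt))

# Generate until the model emits an end-of-sequence token.
while not generator.is_done():
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```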

Model Description

  • Developed by: microsoft
  • Quantized by: llmware (see the loading example after this list)
  • Model type: phi3
  • Parameters: 3.8 billion
  • Model Parent: microsoft/Phi-3-mini-4k-instruct
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Uses: Chat, general-purpose LLM
  • Quantization: int4
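
Since the model was quantized and published by llmware, their Python library is another likely route. This is a sketch under the assumption that the model is registered in llmware's model catalog under the name "phi-3-onnx" and that inference() returns a dict with an "llm_response" field, as in current llmware releases; verify both against the library's docs.

```python
# Minimal sketch, assuming the llmware package is installed and the
# catalog name "phi-3-onnx" is correct (both worth verifying).
from llmware.models import ModelCatalog

model = ModelCatalog().load_model("phi-3-onnx")

# inference() returns a dict; the generated text is assumed to be
# under the "llm_response" key.
response = model.inference("What are the advantages of int4 quantization?")
print(response["llm_response"])
```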

Model Card Contact

llmware on Hugging Face

llmware website
