# phi-4-mini-ov
phi-4-mini-ov is an OpenVINO int4 quantized version of Microsoft Phi-4-Mini, providing a fast, small inference implementation optimized for AI PCs with Intel GPU, CPU, and NPU. A usage sketch follows the model description below.
## Model Description
- Developed by: microsoft
- Quantized by: llmware
- Model type: phi4-mini
- Parameters: 4 billion
- Model Parent: microsoft/Phi-4-mini-instruct
- Language(s) (NLP): English
- License: Apache 2.0
- Uses: Chat, general-purpose LLM
- Quantization: int4
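
As a minimal sketch, an OpenVINO quantized model like this one can typically be run with the `openvino-genai` package once the weights are downloaded locally. The repo id `llmware/phi-4-mini-ov` and the device choice are assumptions not confirmed by this card.

```python
# Minimal sketch: run an OpenVINO int4 model with openvino_genai.
# Assumes `openvino-genai` and `huggingface_hub` are installed, and that
# the repo id "llmware/phi-4-mini-ov" is correct (an assumption).
from huggingface_hub import snapshot_download
import openvino_genai as ov_genai

# Download the OpenVINO IR files to a local directory.
model_dir = snapshot_download("llmware/phi-4-mini-ov")

# Device can be "CPU", "GPU", or "NPU" on Intel AI PC hardware.
pipe = ov_genai.LLMPipeline(model_dir, "CPU")

print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```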
## Model Card Contact