---
license: mit
base_model:
- microsoft/Phi-4-mini-instruct
pipeline_tag: text-generation
tags:
- Phi
- Phi-4
---
## Model Summary
Phi-4-mini-instruct is a lightweight open model built upon synthetic data and filtered publicly available websites, with a focus on high-quality, reasoning-dense data. The model belongs to the Phi-4 model family and supports a 128K token context length. It underwent an enhancement process incorporating both supervised fine-tuning and direct preference optimization to support precise instruction adherence and robust safety measures.
📰 Phi-4-mini Microsoft Blog
📖 Phi-4-mini Technical Report
👩‍🍳 Phi Cookbook
🏡 Phi Portal
🖥️ Try It: Azure, Huggingface
Phi-4: [mini-instruct | onnx]; multimodal-instruct; gguf
## Usage
### Chat format
This format is used for general conversation and instructions:
```
<|system|>Insert System Message<|end|><|user|>Insert User Message<|end|><|assistant|>
```
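For programmatic use, the template can be rendered from plain strings. Below is a minimal sketch; the helper name `render_phi4_chat` is illustrative, not part of any official API.

```python
# Minimal sketch: render a system/user pair into the Phi-4 chat format shown above.
# The function name is illustrative, not an official API.
def render_phi4_chat(system_message: str, user_message: str) -> str:
    return (
        f"<|system|>{system_message}<|end|>"
        f"<|user|>{user_message}<|end|>"
        "<|assistant|>"
    )

print(render_phi4_chat("You are a helpful assistant.", "Explain GGUF in one sentence."))
```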
### Tool-Enabled Function-Calling Format
This format is used when the user wants the model to provide function calls based on the given tools. The user should define the available tools in the system prompt, wrapped by `<|tool|>` and `<|/tool|>` tokens. The tools must be specified in JSON format using a structured JSON dump, as in the example below.
```
<|system|>
You are a helpful assistant with some tools.
<|tool|>
[
  {
    "name": "get_weather_updates",
    "description": "Fetches weather updates for a given city using the RapidAPI Weather API.",
    "parameters": {
      "city": {
        "description": "The name of the city for which to retrieve weather information.",
        "type": "str",
        "default": "London"
      }
    }
  }
]
<|/tool|>
<|end|>
<|user|>
What is the weather like in Paris today?
<|end|>
<|assistant|>
```
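The same prompt can be assembled programmatically. The sketch below builds the system prompt with `json.dumps` (the "structured JSON dump" mentioned above); only the tool schema from the example is assumed.

```python
import json

# Tool schema copied from the example above.
tools = [
    {
        "name": "get_weather_updates",
        "description": "Fetches weather updates for a given city using the RapidAPI Weather API.",
        "parameters": {
            "city": {
                "description": "The name of the city for which to retrieve weather information.",
                "type": "str",
                "default": "London",
            }
        },
    }
]

# Assemble the tool-enabled prompt in the format described above.
prompt = (
    "<|system|>You are a helpful assistant with some tools."
    f"<|tool|>{json.dumps(tools, indent=2)}<|/tool|><|end|>"
    "<|user|>What is the weather like in Paris today?<|end|>"
    "<|assistant|>"
)
print(prompt)
```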
## Phi-4-mini-instruct GGUF Models
### Choosing the Right Model Format
Selecting the correct model format depends on your hardware capabilities and memory constraints.
**BF16 (Brain Float 16)** – Use if BF16 acceleration is available
- A 16-bit floating-point format designed for faster computation while retaining good precision.
- Provides similar dynamic range as FP32 but with lower memory usage.
- Recommended if your hardware supports BF16 acceleration (check your device's specs; a quick capability check is sketched after this section).
- Ideal for high-performance inference with reduced memory footprint compared to FP32.
📌 Use BF16 if:
✔ Your hardware has native BF16 support (e.g., newer GPUs, TPUs).
✔ You want higher precision while saving memory.
✔ You plan to requantize the model into another format.
📌 Avoid BF16 if:
❌ Your hardware does not support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
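One way to check for native BF16 support before picking a file is sketched below. It assumes PyTorch is installed and a CUDA device is present; neither is otherwise required for GGUF inference, and Apple/CPU paths differ.

```python
import torch

# Rough capability check (a sketch, assuming a CUDA device is the target).
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    print("Native BF16 available - the BF16 file is a reasonable choice.")
else:
    print("No native BF16 - prefer F16 or a quantized variant.")
```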
**F16 (Float 16)** – More widely supported than BF16
- A 16-bit floating-point format with high precision but a smaller range of values than BF16.
- Works on most devices with FP16 acceleration support (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
📌 Use F16 if:
✔ Your hardware supports FP16 but not BF16.
✔ You need a balance between speed, memory usage, and accuracy.
✔ You are running on a GPU or another device optimized for FP16 computations.
📌 Avoid F16 if:
❌ Your device lacks native FP16 support (it may run slower than expected).
❌ You have memory limitations.
**Quantized Models (Q4_K, Q6_K, Q8, etc.)** – For CPU & Low-VRAM Inference
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- Lower-bit models (Q4_K) → Best for minimal memory usage, may have lower precision.
- Higher-bit models (Q6_K, Q8_0) → Better accuracy, require more memory.
📌 Use Quantized Models if:
✔ You are running inference on a CPU and need an optimized model.
✔ Your device has low VRAM and cannot load full-precision models.
✔ You want to reduce memory footprint while keeping reasonable accuracy.
📌 Avoid Quantized Models if:
❌ You need maximum accuracy (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
### Summary Table: Model Format Selection
| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|---|---|---|---|---|
| BF16 | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| F16 | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| Q4_K | Low | Very Low | CPU or low-VRAM devices | Best for memory-constrained environments |
| Q6_K | Medium-Low | Low | CPU with more memory | Better accuracy while still being quantized |
| Q8 | Medium | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
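As a concrete example, the sketch below loads one of the quantized files with llama-cpp-python (`pip install llama-cpp-python`); the thread count and context size are assumptions to adapt to your machine, and the file name is taken from the list below.

```python
from llama_cpp import Llama

# Load a low-memory quant for pure CPU inference.
llm = Llama(
    model_path="phi-4-mini-q4_k_m.gguf",
    n_ctx=4096,       # context window; the model supports up to 128K if memory allows
    n_threads=8,      # match your physical core count
    n_gpu_layers=0,   # 0 = CPU only; raise this to offload layers to a GPU
)

prompt = "<|user|>What is GGUF?<|end|><|assistant|>"
out = llm(prompt, max_tokens=128, stop=["<|end|>"])
print(out["choices"][0]["text"])
```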
### Included Files & Details
`phi-4-mini-bf16.gguf`
- Model weights preserved in BF16.
- Use this if you want to requantize the model into a different format.
- Best if your device supports BF16 acceleration.
`phi-4-mini-f16.gguf`
- Model weights stored in F16.
- Use if your device supports FP16, especially if BF16 is not available.
`phi-4-mini-bf16-q8.gguf`
- Output & embeddings remain in BF16.
- All other layers quantized to Q8_0.
- Use if your device supports BF16 and you want a quantized version.
`phi-4-mini-f16-q8.gguf`
- Output & embeddings remain in F16.
- All other layers quantized to Q8_0.
`phi-4-mini-q4_k_l.gguf`
- Output & embeddings quantized to Q8_0.
- All other layers quantized to Q4_K.
- Good for CPU inference with limited memory.
`phi-4-mini-q4_k_m.gguf`
- Similar to Q4_K.
- Another option for low-memory CPU inference.
`phi-4-mini-q4_k_s.gguf`
- Smallest Q4_K variant, using less memory at the cost of accuracy.
- Best for very low-memory setups.
`phi-4-mini-q6_k_l.gguf`
- Output & embeddings quantized to Q8_0.
- All other layers quantized to Q6_K.
`phi-4-mini-q6_k_m.gguf`
- A mid-range Q6_K quantized model for balanced performance.
- Suitable for CPU-based inference with moderate memory.
`phi-4-mini-q8.gguf`
- Fully Q8 quantized model for better accuracy.
- Requires more memory but offers higher precision.
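To fetch a single file rather than the whole repository, `huggingface_hub` can download it by name. A minimal sketch follows; the `repo_id` value is a placeholder to replace with this card's repository id.

```python
from huggingface_hub import hf_hub_download

# Download one quant file by name; repo_id below is a placeholder, not a real repository.
path = hf_hub_download(
    repo_id="your-namespace/phi-4-mini-gguf",  # placeholder - use this card's repo id
    filename="phi-4-mini-q4_k_m.gguf",
)
print(path)
```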