INT8-INT4 microsoft/Phi-4-mini-instruct model
- Developed by: metascroy
- License: apache-2.0
- Quantized from model: microsoft/Phi-4-mini-instruct
- Quantization method: INT8-INT4
Running in a mobile app
(TODO: pte file name generation) The pte file can be run with ExecuTorch on a mobile phone. See the instructions for doing this on iOS. On iPhone 15 Pro, the model runs at (to be filled) tokens/sec and uses (to be filled) MB of memory.
TODO: attach image
Quantization Recipe
Install the required packages:
pip install torch
pip install git+https://github.com/huggingface/transformers@main
pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126
pip install accelerate
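Optionally, verify the environment before proceeding (a quick check, assuming each package exposes __version__ as usual):

python -c "import torch, torchao, transformers; print(torch.__version__, torchao.__version__, transformers.__version__)"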
Untie Embedding Weights
We want to quantize the embedding and lm_head differently. Since those layers are tied, we first need to untie the model:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.modeling_utils import find_tied_parameters

model_id = "microsoft/Phi-4-mini-instruct"
untied_model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(untied_model)
print("tied weights:", find_tied_parameters(untied_model))

# Untie the weights: disable tying in the config and give lm_head its own copy
if getattr(untied_model.config.get_text_config(decoder=True), "tie_word_embeddings"):
    setattr(untied_model.config.get_text_config(decoder=True), "tie_word_embeddings", False)

untied_model._tied_weights_keys = []
untied_model.lm_head.weight = torch.nn.Parameter(untied_model.lm_head.weight.clone())

print("tied weights:", find_tied_parameters(untied_model))
USER_ID = "YOUR_USER_ID"
MODEL_NAME = model_id.split("/")[-1]
save_to = f"{{USER_ID}}/{{MODEL_NAME}}-untied-weights"
# save locally (we use this in the recipe)
save_to_local_path = f"{{MODEL_NAME}}-untied-weights"
untied_model.save_pretrained(save_to_local_path)
tokenizer.save_pretrained(save_to_local_path)
# or push to hub
untied_model.push_to_hub(save_to)
tokenizer.push_to_hub(save_to)
Note: to push_to_hub you need to run

pip install -U "huggingface_hub[cli]"
huggingface-cli login

and use a token with write access from https://huggingface.co/settings/tokens.
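If you prefer logging in from Python instead of the CLI, huggingface_hub also provides a login helper (a minimal sketch; it prompts for the same write-access token):

from huggingface_hub import login

login()  # prompts for a token; alternatively, pass login(token="hf_...")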
Quantization
Use the following code to get the quantized model:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
from torchao.quantization.quant_api import (
    IntxWeightOnlyConfig,
    Int8DynamicActivationIntxWeightConfig,
    ModuleFqnToConfig,
)
from torchao.quantization.granularity import PerGroup, PerAxis

model_id = "microsoft/Phi-4-mini-instruct"
MODEL_NAME = model_id.split("/")[-1]
model_to_quantize = f"{MODEL_NAME}-untied-weights"
# int8 per-axis weight-only quantization for the embedding
embedding_config = IntxWeightOnlyConfig(
    weight_dtype=torch.int8,
    granularity=PerAxis(0),
    version=2,
)
# int8 dynamic activation + int4 weight quantization for linear layers
linear_config = Int8DynamicActivationIntxWeightConfig(
    weight_dtype=torch.int4,
    weight_granularity=PerGroup(32),
    version=2,
)
quant_config = ModuleFqnToConfig({"_default": linear_config, "model.embed_tokens": embedding_config})
quantization_config = TorchAoConfig(quant_type=quant_config, include_input_output_embeddings=True, modules_to_not_convert=[])
quantized_model = AutoModelForCausalLM.from_pretrained(model_to_quantize, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
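# Optional sanity check (illustrative): weights quantized by the config above
# should now be torchao tensor subclasses rather than plain torch.Tensor
print(type(quantized_model.model.embed_tokens.weight))
print(type(quantized_model.lm_head.weight))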
# Push to hub
USER_ID = "YOUR_USER_ID"
MODEL_NAME = model_id.split("/")[-1]
save_to = f"{USER_ID}/{MODEL_NAME}-INT8-INT4"
quantized_model.push_to_hub(save_to, safe_serialization=False)
tokenizer.push_to_hub(save_to)
# Manual Testing
prompt = "Hey, are you conscious? Can you talk to me?"
messages = [
    {
        "role": "system",
        "content": "",
    },
    {"role": "user", "content": prompt},
]
templated_prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print("Prompt:", prompt)
print("Templated prompt:", templated_prompt)
inputs = tokenizer(
    templated_prompt,
    return_tensors="pt",
).to("cuda")
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128)
output_text = tokenizer.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Response:", output_text[0][len(prompt):])
Note: to push_to_hub you need to run

pip install -U "huggingface_hub[cli]"
huggingface-cli login

and use a token with write access from https://huggingface.co/settings/tokens.
Model Quality
We rely on lm-evaluation-harness to evaluate the quality of the quantized model. Here we only run mmlu as a sanity check.
| Benchmark | microsoft/Phi-4-mini-instruct | metascroy/Phi-4-mini-instruct-INT8-INT4 |
|---|---|---|
| mmlu | To be filled | To be filled |
Reproduce Model Quality Results
You need to install lm-eval from source: https://github.com/EleutherAI/lm-evaluation-harness#install
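For reference, the source install described in the lm-evaluation-harness README is:

git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .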
Baseline
lm_eval --model hf --model_args pretrained=microsoft/Phi-4-mini-instruct --tasks mmlu --device cuda:0 --batch_size 8
INT8-INT4
export MODEL=metascroy/Phi-4-mini-instruct-INT8-INT4
lm_eval --model hf --model_args pretrained=$MODEL --tasks mmlu --device cuda:0 --batch_size 8
Exporting to ExecuTorch
We can run the quantized model on a mobile phone using ExecuTorch. Once ExecuTorch is set up, exporting and running the model on device is a breeze.
ExecuTorch's LLM export scripts require the checkpoint keys and parameters to have certain names, which differ from those used in Hugging Face. So we first use a conversion script that converts the Hugging Face checkpoint key names to the ones ExecuTorch expects:
python -m executorch.examples.models.[TODO: USE CORRECT MODEL].convert_weights $(hf download metascroy/Phi-4-mini-instruct-INT8-INT4) pytorch_model_converted.bin
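For intuition only, the sketch below shows the kind of key renaming such a conversion performs. The checkpoint path and key names here are hypothetical; the real mapping lives in the model-specific convert_weights script:

import torch

# Load the downloaded Hugging Face checkpoint (path is illustrative)
state_dict = torch.load("pytorch_model.bin", map_location="cpu")

converted = {}
for key, value in state_dict.items():
    # Hypothetical rename: the export script expects its own key layout
    converted[key.replace("model.layers.", "layers.")] = value

torch.save(converted, "pytorch_model_converted.bin")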
Once we have the checkpoint, we export it to ExecuTorch with the XNNPACK backend as follows. (The ExecuTorch LLM export script requires config.json to have certain key names. The correct config to use for the LLM export script is located at [TODO: fill in, e.g., examples/models/qwen3/config/4b_config.json] within the ExecuTorch repo.)
[TODO: fix command below where necessary]
python -m executorch.examples.models.llama.export_llama --model "qwen3_4b" --checkpoint pytorch_model_converted.bin --params examples/models/qwen3/config/4b_config.json --output_name="model.pte" -kv --use_sdpa_with_kv_cache -X --xnnpack-extended-ops --max_context_length 1024 --max_seq_length 1024 --dtype fp32 --metadata '{"get_bos_id":199999, "get_eos_ids":[200020,199999]}'
After that you can run the model in a mobile app (see Running in a mobile app).
Paper: TorchAO: PyTorch-Native Training-to-Serving Model Optimization
The model's quantization is powered by TorchAO, a framework presented in the paper TorchAO: PyTorch-Native Training-to-Serving Model Optimization.
Abstract: We present TorchAO, a PyTorch-native model optimization framework leveraging quantization and sparsity to provide an end-to-end, training-to-serving workflow for AI models. TorchAO supports a variety of popular model optimization techniques, including FP8 quantized training, quantization-aware training (QAT), post-training quantization (PTQ), and 2:4 sparsity, and leverages a novel tensor subclass abstraction to represent a variety of widely-used, backend agnostic low precision data types, including INT4, INT8, FP8, MXFP4, MXFP6, and MXFP8. TorchAO integrates closely with the broader ecosystem at each step of the model optimization pipeline, from pre-training (TorchTitan) to fine-tuning (TorchTune, Axolotl) to serving (HuggingFace, vLLM, SGLang, ExecuTorch), connecting an otherwise fragmented space in a single, unified workflow. TorchAO has enabled recent launches of the quantized Llama 3.2 1B/3B and LlamaGuard3-8B models and is open-source at https://github.com/pytorch/ao.
Resources
- Official TorchAO GitHub Repository: https://github.com/pytorch/ao
- TorchAO Documentation: https://docs.pytorch.org/ao/stable/index.html
Disclaimer
PyTorch has not performed safety evaluations or red teamed the quantized models. Performance characteristics, outputs, and behaviors may differ from the original models. Users are solely responsible for selecting appropriate use cases, evaluating and mitigating for accuracy, safety, and fairness, ensuring security, and complying with all applicable laws and regulations.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the licenses the models are released under, including any limitations of liability or disclaimers of warranties provided therein.