--- library_name: transformers tags: - torchao - phi - phi4 - nlp - code - math - chat - conversational license: mit language: - multilingual base_model: - microsoft/Phi-4-mini-instruct pipeline_tag: text-generation --- [Phi4-mini](https://huggingface.co/microsoft/Phi-4-mini-instruct) is quantized with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) with 8-bit embeddings, and 8-bit dynamic activation with int4 weights (8da4w), by PyTorch team. You can export the quantized model to an [ExecuTorch](https://github.com/pytorch/executorch) pte file, or use the [quantized pte](https://huggingface.co/pytorch/Phi-4-mini-instruct-8da4w/blob/main/phi4-mini-8da4w.pte) file directly to run on a mobile device, see [Running in a mobile app](#running-in-a-mobile-app). # Quantization Recipe First need to install the required packages: ``` pip install git+https://github.com/huggingface/transformers@main pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126 ``` ## Untie Embedding Weights Before quantization, since we need quantize input embedding and unembedding (lm_head) layer which are tied, but we want to quantize them separately, we first need to untie the model: ``` from transformers import ( AutoModelForCausalLM, AutoProcessor, AutoTokenizer, ) import torch model_id = "microsoft/Phi-4-mini-instruct" untied_model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_id) print(untied_model) from transformers.modeling_utils import find_tied_parameters print("tied weights:", find_tied_parameters(untied_model)) if getattr(untied_model.config.get_text_config(decoder=True), "tie_word_embeddings"): setattr(untied_model.config.get_text_config(decoder=True), "tie_word_embeddings", False) untied_model._tied_weights_keys = [] untied_model.lm_head.weight = torch.nn.Parameter(untied_model.lm_head.weight.clone()) print("tied weights:", find_tied_parameters(untied_model)) USER_ID = "YOUR_USER_ID" MODEL_NAME = model_id.split("/")[-1] save_to = f"{USER_ID}/{MODEL_NAME}-untied-weights" untied_model.push_to_hub(save_to) tokenizer.push_to_hub(save_to) ``` ## Quantization We used following code to get the quantized model: ``` from transformers import ( AutoModelForCausalLM, AutoProcessor, AutoTokenizer, TorchAoConfig, ) from torchao.quantization.quant_api import ( IntxWeightOnlyConfig, Int8DynamicActivationIntxWeightConfig, AOPerModuleConfig, quantize_, ) from torchao.quantization.granularity import PerGroup, PerAxis import torch # we start from the model with untied weights model_id = "microsoft/Phi-4-mini-instruct" USER_ID = "YOUR_USER_ID" MODEL_NAME = model_id.split("/")[-1] untied_model_id = f"{USER_ID}/{MODEL_NAME}-untied-weights" embedding_config = IntxWeightOnlyConfig( weight_dtype=torch.int8, granularity=PerAxis(0), ) linear_config = Int8DynamicActivationIntxWeightConfig( weight_dtype=torch.int4, weight_granularity=PerGroup(32), weight_scale_dtype=torch.bfloat16, ) quant_config = AOPerModuleConfig({"_default": linear_config, "model.embed_tokens": embedding_config}) quantization_config = TorchAoConfig(quant_type=quant_config, include_embedding=True, untie_embedding_weights=True, modules_to_not_convert=[]) quantized_model = AutoModelForCausalLM.from_pretrained(untied_model_id, torch_dtype=torch.float32, device_map="auto", quantization_config=quantization_config) tokenizer = AutoTokenizer.from_pretrained(model_id) # Push to hub MODEL_NAME = model_id.split("/")[-1] save_to = f"{USER_ID}/{MODEL_NAME}-untied-8da4w" quantized_model.push_to_hub(save_to, safe_serialization=False) tokenizer.push_to_hub(save_to) # Manual testing prompt = "Hey, are you conscious? Can you talk to me?" messages = [ { "role": "system", "content": "", }, {"role": "user", "content": prompt}, ] templated_prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, ) print("Prompt:", prompt) print("Templated prompt:", templated_prompt) inputs = tokenizer( templated_prompt, return_tensors="pt", ).to("cuda") generated_ids = quantized_model.generate(**inputs, max_new_tokens=128) output_text = tokenizer.batch_decode( generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print("Response:", output_text[0][len(prompt):]) # Save to disk state_dict = quantized_model.state_dict() torch.save(state_dict, "phi4-mini-8da4w.bin") ``` The response from the manual testing is: ``` Hello! As an AI, I don't have consciousness in the way humans do, but I am fully operational and here to assist you. How can I help you today? ``` # Model Quality We rely on [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the quality of the quantized model. Need to install lm-eval from source: https://github.com/EleutherAI/lm-evaluation-harness#install ## baseline ``` lm_eval --model hf --model_args pretrained=microsoft/Phi-4-mini-instruct --tasks hellaswag --device cuda:0 --batch_size 64 ``` ## int8 dynamic activation and int4 weight quantization (8da4w) ``` lm_eval --model hf --model_args pretrained=pytorch/Phi-4-mini-instruct-8da4w --tasks hellaswag --device cuda:0 --batch_size 64 ``` | Benchmark | | | |----------------------------------|-------------|-------------------| | | Phi-4 mini-Ins | phi4-mini-8da4w| | **Popular aggregated benchmark** | | | | mmlu (0 shot) | 66.73 | 63.11 | | mmlu_pro (5-shot) | 44.71 | 35.31 | | **Reasoning** | | | | arc_challenge | 56.91 | 55.12 | | gpqa_main_zeroshot | 30.13 | 29.02 | | hellaswag | 54.57 | 53.23 | | openbookqa | 33.00 | 32.40 | | piqa (0-shot) | 77.64 | 76.66 | | siqa | 49.59 | 47.08 | | truthfulqa_mc2 (0-shot) | 48.39 | 47.99 | | winogrande (0-shot) | 71.11 | 70.17 | | **Multilingual** | | | | mgsm_en_cot_en | 60.80 | 58.8 | | **Math** | | | | gsm8k (5-shot) | 81.88 | 70.43 | | Mathqa (0-shot) | 42.31 | 41.57 | | **Overall** | 55.21 | 52.38 | # Exporting to ExecuTorch We can run the quantized model on a mobile phone using [ExecuTorch](https://github.com/pytorch/executorch). Once ExecuTorch is [set-up](https://pytorch.org/executorch/main/getting-started.html), exporting and running the model on device is a breeze. We first convert the quantized checkpoint to one ExecuTorch's LLM export script expects by renaming some of the checkpoint keys. The following script does this for you. We have uploaded phi4-mini-8da4w-converted.bin here for convenience. ``` python -m executorch.examples.models.phi_4_mini.convert_weights pytorch_model.bin phi4-mini-8da4w-converted.bin ``` Once the checkpoint is converted, we can export to ExecuTorch's PTE format with the XNNPACK delegate. ``` PARAMS="executorch/examples/models/phi_4_mini/config.json" python -m executorch.examples.models.llama.export_llama \ --model "phi_4_mini" \ --checkpoint "phi4-mini-8da4w-converted.bin" \ --params "$PARAMS" \ -kv \ --use_sdpa_with_kv_cache \ -X \ --metadata '{"get_bos_id":199999, "get_eos_ids":[200020,199999]}' \ --output_name="phi4-mini-8da4w.pte" ``` ## Running in a mobile app The PTE file can be run with ExecuTorch on a mobile phone. See the [instructions](https://pytorch.org/executorch/main/llm/llama-demo-ios.html) for doing this in iOS. On iPhone 15 Pro, the model runs at 17.3 tokens/sec and uses 3206 Mb of memory. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66049fc71116cebd1d3bdcf4/521rXwIlYS9HIAEBAPJjw.png) # Disclaimer PyTorch has not performed safety evaluations or red teamed the quantized models. Performance characteristics, outputs, and behaviors may differ from the original models. Users are solely responsible for selecting appropriate use cases, evaluating and mitigating for accuracy, safety, and fairness, ensuring security, and complying with all applicable laws and regulations. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the licenses the models are released under, including any limitations of liability or disclaimers of warranties provided therein.