---
library_name: transformers
tags:
- torchao
- phi
- phi4
- nlp
- code
- math
- chat
- conversational
license: mit
language:
- multilingual
base_model:
- microsoft/Phi-4-mini-instruct
pipeline_tag: text-generation
---

# Quantization Recipe

First, install the required packages:
```
pip install git+https://github.com/huggingface/transformers@main
pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126
```
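
Optionally, sanity-check that the expected builds were picked up before proceeding (a minimal check; the exact version strings will vary):
```
import torch
import torchao
import transformers

# Nightly/dev builds typically carry a dev suffix in their version strings.
print("torch:", torch.__version__)
print("torchao:", torchao.__version__)
print("transformers:", transformers.__version__)
```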

## Untie Embedding Weights
Before quantization, we first need to untie the model, since the input embedding and unembedding (`lm_head`) layers share tied weights and we quantize both of them:

```
from transformers import (
  AutoModelForCausalLM,
  AutoProcessor,
  AutoTokenizer,
)
import torch

model_id = "microsoft/Phi-4-mini-instruct"
untied_model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(untied_model)
from transformers.modeling_utils import find_tied_parameters
print("tied weights:", find_tied_parameters(untied_model))
if getattr(untied_model.config.get_text_config(decoder=True), "tie_word_embeddings"):
    setattr(untied_model.config.get_text_config(decoder=True), "tie_word_embeddings", False)

untied_model._tied_weights_keys = []
untied_model.lm_head.weight = torch.nn.Parameter(untied_model.lm_head.weight.clone())

print("tied weights:", find_tied_parameters(untied_model))

USER_ID = "YOUR_USER_ID"
MODEL_NAME = model_id.split("/")[-1]
save_to = f"{USER_ID}/{MODEL_NAME}-untied-weights"
untied_model.push_to_hub(save_to)
tokenizer.push_to_hub(save_to)
```
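
If you prefer not to push the untied checkpoint to the Hub, you can instead save it to a local directory and use that path as the starting point for the quantization step below (a minimal sketch; the directory name is just an example):
```
# Hypothetical local path; pass this directory as `untied_model_id` in the next step.
local_dir = "Phi-4-mini-instruct-untied-weights"
untied_model.save_pretrained(local_dir)
tokenizer.save_pretrained(local_dir)
```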

## Quantization

We used the following code to get the quantized model:

```
from transformers import (
  AutoModelForCausalLM,
  AutoProcessor,
  AutoTokenizer,
  TorchAoConfig,
)
from torchao.quantization.quant_api import (
    IntxWeightOnlyConfig,
    Int8DynamicActivationIntxWeightConfig,
    AOPerModuleConfig,
    quantize_,
)
from torchao.quantization.granularity import PerGroup, PerAxis
import torch

# we start from the model with untied weights
model_id = "microsoft/Phi-4-mini-instruct"
USER_ID = "YOUR_USER_ID"
MODEL_NAME = model_id.split("/")[-1]
untied_model_id = f"{USER_ID}/{MODEL_NAME}-untied-weights"

embedding_config = IntxWeightOnlyConfig(
    weight_dtype=torch.int8,
    granularity=PerAxis(0),
)
linear_config = Int8DynamicActivationIntxWeightConfig(
    weight_dtype=torch.int4,
    weight_granularity=PerGroup(32),
    weight_scale_dtype=torch.bfloat16,
)

quant_config = AOPerModuleConfig({"_default": linear_config, "model.embed_tokens": embedding_config})
quantization_config = TorchAoConfig(quant_type=quant_config, include_embedding=True, untie_embedding_weights=True, modules_to_not_convert=[])
quantized_model = AutoModelForCausalLM.from_pretrained(untied_model_id, torch_dtype=torch.float32, device_map="auto", quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Push to hub
USER_ID = "YOUR_USER_ID"
save_to = f"{USER_ID}/phi4-mini-8dq4w"
quantized_model.push_to_hub(save_to, safe_serialization=False)
tokenizer.push_to_hub(save_to)

# Manual testing
prompt = "Hey, are you conscious? Can you talk to me?"
messages = [
    {
        "role": "system",
        "content": "",
    },
    {"role": "user", "content": prompt},
]
templated_prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print("Prompt:", prompt)
print("Templated prompt:", templated_prompt)
inputs = tokenizer(
    templated_prompt,
    return_tensors="pt",
).to("cuda")
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128)
output_text = tokenizer.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Response:", output_text[0][len(prompt):])

# Save the quantized state dict to disk; this checkpoint is used later for the ExecuTorch export
state_dict = quantized_model.state_dict()
torch.save(state_dict, "phi4-mini-8dq4w.bin")

```

The response from the manual testing is:

```
Hello! As an AI, I don't have consciousness in the way humans do, but I am fully operational and here to assist you. How can I help you today?
```
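
If you only want to use the quantized model, you can load the pushed checkpoint directly instead of re-running quantization (a minimal sketch, assuming the repository name used above):
```
from transformers import AutoModelForCausalLM, AutoTokenizer

# Replace with the repository you pushed the quantized model to.
quantized_model_id = "YOUR_USER_ID/phi4-mini-8dq4w"
quantized_model = AutoModelForCausalLM.from_pretrained(quantized_model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(quantized_model_id)
```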

# Model Quality

We rely on [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the quality of the quantized model.

You need to install lm-eval from source: https://github.com/EleutherAI/lm-evaluation-harness#install

## Baseline
```
lm_eval --model hf --model_args pretrained=microsoft/Phi-4-mini-instruct --tasks hellaswag --device cuda:0 --batch_size 64
```

## 8dq4w
```
import lm_eval
from lm_eval import evaluator
from lm_eval.utils import (
    make_table,
)

# `quantized_model` is the in-memory model produced in the Quantization section above;
# alternatively, point `pretrained` at the pushed checkpoint on the Hub.
lm_eval_model = lm_eval.models.huggingface.HFLM(pretrained=quantized_model, batch_size=64)
results = evaluator.simple_evaluate(
    lm_eval_model, tasks=["hellaswag"], device="cuda:0", batch_size="auto"
)
print(make_table(results))
```

| Benchmark                        | Phi-4 mini-Ins | phi4-mini-8dq4w |
|----------------------------------|----------------|-----------------|
| **Popular aggregated benchmark** |             |                   |
| mmlu (0 shot)                    | 66.73       | 63.11             |
| mmlu_pro (5-shot)                | 44.71       | 35.31             |
| **Reasoning**                    |             |                   |
| arc_challenge                    | 56.91       | 55.12             |
| gpqa_main_zeroshot               | 30.13       | 29.02             |
| hellaswag                        | 54.57       | 53.23             |
| openbookqa                       | 33.00       | 32.40             |
| piqa (0-shot)                    | 77.64       | 76.66             |
| siqa                             | 49.59       | 47.08             |
| truthfulqa_mc2 (0-shot)          | 48.39       | 47.99             |
| winogrande (0-shot)              | 71.11       | 70.17             |
| **Multilingual**                 |             |                   |
| mgsm_en_cot_en                   | 60.80       | 58.8              |
| **Math**                         |             |                   |
| gsm8k (5-shot)                   | 81.88       | 70.43             |
| Mathqa (0-shot)                  | 42.31       | 41.57             |


# Exporting to ExecuTorch

We can run the quantized model on a mobile phone using [ExecuTorch](https://github.com/pytorch/executorch).
Once ExecuTorch is [set up](https://pytorch.org/executorch/main/getting-started.html), exporting and running the model on device is a breeze.

We first convert the quantized checkpoint to the format that ExecuTorch's LLM export script expects by renaming some of the checkpoint keys.
The following command does this for you.
```
python -m executorch.examples.models.phi_4_mini.convert_weights phi4-mini-8dq4w.bin phi4-mini-8dq4w-converted.bin
```

Once the checkpoint is converted, we can export to ExecuTorch's PTE format with the XNNPACK delegate.

```
PARAMS="executorch/examples/models/phi_4_mini/config.json"
python -m executorch.examples.models.llama.export_llama \
  --model "phi_4_mini" \
  --checkpoint "phi4-mini-8dq4w-converted.bin" \
  --params "$PARAMS" \
  -kv \
  --use_sdpa_with_kv_cache \
  -X \
  --metadata '{"get_bos_id":199999, "get_eos_ids":[200020,199999]}' \
  --output_name="phi4-mini-8dq4w.pte"
```

## Running in a mobile app
The PTE file can be run with ExecuTorch on a mobile phone. See the [instructions](https://pytorch.org/executorch/main/llm/llama-demo-ios.html) for doing this on iOS.
On iPhone 15 Pro, the model runs at 17.3 tokens/sec and uses 3,206 MB of memory.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66049fc71116cebd1d3bdcf4/AEdAJjGK2lED7tr6seWGf.png)

# Disclaimer
PyTorch has not performed safety evaluations or red teamed the quantized models. Performance characteristics, outputs, and behaviors may differ from the original models. Users are solely responsible for selecting appropriate use cases, evaluating and mitigating for accuracy, safety, and fairness, ensuring security, and complying with all applicable laws and regulations.

Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the licenses the models are released under, including any limitations of liability or disclaimers of warranties provided therein.