---
library_name: transformers
pipeline_tag: text-generation
tags:
- glm4_moe
- AWQ
- FP16Mix
- quantization fix
- vLLM
base_model:
  - zai-org/GLM-4.5-Air
base_model_relation: quantized
---
# GLM-4.5-Air-AWQ-FP16Mix
Base model: [zai-org/GLM-4.5-Air](https://huggingface.co/zai-org/GLM-4.5-Air)

### 【vLLM Single Node with 8 GPUs Startup Command】
<i>Note: You must pass `--enable-expert-parallel` when serving this model; without it, the expert tensors cannot be evenly divided under tensor parallelism. This is required even when using only 2 GPUs.</i>
```
CONTEXT_LENGTH=32768

vllm serve \
    QuantTrio/GLM-4.5-Air-AWQ-FP16Mix \
    --served-model-name GLM-4.5-Air-AWQ-FP16Mix \
    --enable-expert-parallel \
    --swap-space 16 \
    --max-num-seqs 512 \
    --max-model-len $CONTEXT_LENGTH \
    --max-seq-len-to-capture $CONTEXT_LENGTH \
    --gpu-memory-utilization 0.9 \
    --tensor-parallel-size 8 \
    --trust-remote-code \
    --disable-log-requests \
    --host 0.0.0.0 \
    --port 8000
```
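
Once the server is up, it exposes an OpenAI-compatible API on port 8000. Below is a minimal client sketch, assuming the local address and the served model name from the command above; the prompt and sampling parameters are illustrative only.
```python
from openai import OpenAI

# Point the client at the local vLLM server started above
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="GLM-4.5-Air-AWQ-FP16Mix",  # must match --served-model-name
    messages=[{"role": "user", "content": "Briefly explain expert parallelism in MoE models."}],
    max_tokens=512,
    temperature=0.6,
)
print(response.choices[0].message.content)
```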

### 【Dependencies】
```
vllm==0.10.0
```
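
The temporary patch below targets this exact release, so it may help to verify the installed version before starting the server; a quick sanity check:
```python
import vllm

# The awq_marlin patch below is written against vllm 0.10.0 specifically
assert vllm.__version__ == "0.10.0", f"expected vllm 0.10.0, found {vllm.__version__}"
```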

### 【❗❗Temporary Patch for vllm==0.10.0❗❗】
The `awq_marlin` module in `vllm` does not check the `modules_to_not_convert` parameter when loading AWQ MoE modules, which causes mixed-precision quantization of MoE layers to fail or raise errors.  
Refer to: [[PR #21888]](https://github.com/vllm-project/vllm/pull/21888)

Until that PR is merged, temporarily replace the installed `vllm/model_executor/layers/quantization/awq_marlin.py` with the patched version.
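
The on-disk location of the installed module varies by environment; a small sketch to find the file that needs replacing:
```python
# Print the path of the installed module that needs to be replaced
import vllm.model_executor.layers.quantization.awq_marlin as awq_marlin

print(awq_marlin.__file__)  # overwrite this file with the patched copy
```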

### 【Model Update Date】
```
2025-07-30
1. Initial commit
```

### 【Model Files】
| File Size | Last Updated |
|-----------|--------------|
| `69GB`    | `2025-07-30` |

### 【Model Download】
```python
from huggingface_hub import snapshot_download

# Download all model files (~69 GB) into a local directory
snapshot_download('QuantTrio/GLM-4.5-Air-AWQ-FP16Mix', cache_dir="your_local_path")
```

### 【Overview】
# GLM-4.5

<div align="center">
<img src="https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg" width="15%"/>
</div>

<p align="center">
    👋 Join our <a href="https://github.com/zai-org/GLM-4.5/blob/main/resources/WECHAT.md" target="_blank"> WeChat group </a>.
    <br>
    📖 Read the GLM-4.5 <a href="https://z.ai/blog/glm-4.5" target="_blank"> technical blog </a>.
    <br>
    📍 Access GLM-4.5 API via the <a href="https://docs.bigmodel.cn/cn/guide/models/text/glm-4.5"> ZhipuAI Open Platform </a>.
    <br>
    👉 Try it online at <a href="https://chat.z.ai" >GLM-4.5 </a>.
</p>

## Model Introduction

The **GLM-4.5** model series consists of foundation models designed for agents. GLM-4.5 has **355 billion** total parameters, of which **32 billion** are active. GLM-4.5-Air adopts a more compact design with **106 billion** total parameters and **12 billion** active parameters. The GLM-4.5 models unify reasoning, coding, and agent capabilities to meet the complex demands of agent applications.

Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that offer two modes: a *thinking mode* for complex reasoning and tool use, and a *non-thinking mode* for instant responses.
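
When serving with vLLM, mode selection is typically passed through chat template kwargs. The sketch below assumes the GLM-4.5 chat template accepts an `enable_thinking` flag, as described in the upstream GLM-4.5 documentation; treat the exact kwarg name as an assumption rather than a confirmed API.
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Request an instant (non-thinking) response; assumes the chat template
# exposes an `enable_thinking` kwarg as in the upstream GLM-4.5 docs
response = client.chat.completions.create(
    model="GLM-4.5-Air-AWQ-FP16Mix",
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)
```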

We have open-sourced the base models, hybrid reasoning models, and FP8 versions of GLM-4.5 and GLM-4.5-Air. They are released under the MIT license and can be used for commercial purposes and secondary development.

In our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieved an excellent score of **63.2**, ranking **3rd** among all proprietary and open-source models. Notably, GLM-4.5-Air maintained strong efficiency while achieving a competitive score of **59.8**.

![bench](https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/bench.png)

For more detailed evaluation results, demo cases, and technical information, please visit our [technical blog](https://z.ai/blog/glm-4.5). The full technical report will be released soon.

Model code, tool parsers, and reasoning parsers can be found in the following implementations:
- [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe)
- [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py)
- [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py)

## Quick Start

Please refer to our [GitHub project](https://github.com/zai-org/GLM-4.5).