mikasenghaas committed on
Commit
9da3a96
·
verified ·
1 Parent(s): 3d55fef

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,44 @@
1
+ ---
2
+ language:
3
+ - en
4
+ - zh
5
+ library_name: transformers
6
+ license: mit
7
+ pipeline_tag: text-generation
8
+ ---
9
+
10
+ # GLM-4.5-Air
11
+
12
+ <div align="center">
13
+ <img src="https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg" width="15%"/>
14
+ </div>
15
+ <p align="center">
16
+ 👋 Join our <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community.
17
+ <br>
18
+ 📖 Check out the GLM-4.5 <a href="https://z.ai/blog/glm-4.5" target="_blank">technical blog</a>, <a href="https://arxiv.org/abs/2508.06471" target="_blank">technical report</a>, and <a href="https://zhipu-ai.feishu.cn/wiki/Gv3swM0Yci7w7Zke9E0crhU7n7D" target="_blank">Zhipu AI technical documentation</a>.
19
+ <br>
20
+ 📍 Use GLM-4.5 API services on <a href="https://docs.z.ai/guides/llm/glm-4.5">Z.ai API Platform (Global)</a> or <br> <a href="https://docs.bigmodel.cn/cn/guide/models/text/glm-4.5">Zhipu AI Open Platform (Mainland China)</a>.
21
+ <br>
22
+ 👉 Try <a href="https://chat.z.ai">GLM-4.5</a> online with one click.
23
+ </p>
24
+
25
+ ## Model Introduction
26
+
27
+ The **GLM-4.5** series models are foundation models designed for intelligent agents. GLM-4.5 has **355** billion total parameters with **32** billion active parameters, while GLM-4.5-Air adopts a more compact design with **106** billion total parameters and **12** billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications.
28
+
29
+ Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that provide two modes: thinking mode for complex reasoning and tool usage, and non-thinking mode for immediate responses.
30
+
31
+ We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license and can be used commercially and for secondary development.
32
+
33
+ As demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves a score of **63.2**, ranking **3rd** among all proprietary and open-source models. Notably, GLM-4.5-Air delivers a competitive **59.8** while maintaining superior efficiency.
34
+
35
+ ![bench](https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/bench.png)
36
+
37
+ For more evaluation results, showcases, and technical details, please visit
38
+ our [technical blog](https://z.ai/blog/glm-4.5) or [technical report](https://huggingface.co/papers/2508.06471).
39
+
40
+ The model code, tool parser, and reasoning parser can be found in the implementations of [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe), [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py), and [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py).
41
+
42
+ ## Quick Start
43
+
44
+ Please refer to our [GitHub page](https://github.com/zai-org/GLM-4.5) for more details.
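
The Quick Start above only points to the GitHub page, so here is a minimal usage sketch (not part of the upstream README). It assumes the files in this folder are loaded locally (or via the corresponding Hub repo id), that `trust_remote_code=True` is used so the bundled `configuration_glm4_moe.py`/`modeling_glm4_moe.py` are picked up via `auto_map`, and that `torchtitan` is installed, since `modeling_glm4_moe.py` imports its MoE layers.

```python
# Minimal usage sketch, under the assumptions stated above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "."  # or the Hub repo id this folder was uploaded to

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # resolves auto_map to the bundled python files
)

messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```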
chat_template.jinja ADDED
@@ -0,0 +1,103 @@
1
+ [gMASK]<sop>
2
+ {%- if tools -%}
3
+ <|system|>
4
+ # Tools
5
+
6
+ You may call one or more functions to assist with the user query.
7
+
8
+ You are provided with function signatures within <tools></tools> XML tags:
9
+ <tools>
10
+ {% for tool in tools %}
11
+ {{ tool | tojson(ensure_ascii=False) }}
12
+ {% endfor %}
13
+ </tools>
14
+
15
+ For each function call, output the function name and arguments within the following XML format:
16
+ <tool_call>{function-name}
17
+ <arg_key>{arg-key-1}</arg_key>
18
+ <arg_value>{arg-value-1}</arg_value>
19
+ <arg_key>{arg-key-2}</arg_key>
20
+ <arg_value>{arg-value-2}</arg_value>
21
+ ...
22
+ </tool_call>{%- endif -%}
23
+ {%- macro visible_text(content) -%}
24
+ {%- if content is string -%}
25
+ {{- content }}
26
+ {%- elif content is iterable and content is not mapping -%}
27
+ {%- for item in content -%}
28
+ {%- if item is mapping and item.type == 'text' -%}
29
+ {{- item.text }}
30
+ {%- elif item is string -%}
31
+ {{- item }}
32
+ {%- endif -%}
33
+ {%- endfor -%}
34
+ {%- else -%}
35
+ {{- content }}
36
+ {%- endif -%}
37
+ {%- endmacro -%}
38
+ {%- set ns = namespace(last_user_index=-1) %}
39
+ {%- for m in messages %}
40
+ {%- if m.role == 'user' %}
41
+ {% set ns.last_user_index = loop.index0 -%}
42
+ {%- endif %}
43
+ {%- endfor %}
44
+ {% for m in messages %}
45
+ {%- if m.role == 'user' -%}<|user|>
46
+ {{ visible_text(m.content) }}
47
+ {{- '/nothink' if (enable_thinking is defined and not enable_thinking and not visible_text(m.content).endswith("/nothink")) else '' -}}
48
+ {%- elif m.role == 'assistant' -%}
49
+ <|assistant|>
50
+ {%- set reasoning_content = '' %}
51
+ {%- set content = visible_text(m.content) %}
52
+ {%- if m.reasoning_content is string %}
53
+ {%- set reasoning_content = m.reasoning_content %}
54
+ {%- else %}
55
+ {%- if '</think>' in content %}
56
+ {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
57
+ {%- set content = content.split('</think>')[-1].lstrip('\n') %}
58
+ {%- endif %}
59
+ {%- endif %}
60
+ {%- if loop.index0 > ns.last_user_index and reasoning_content -%}
61
+ {{ '\n<think>' + reasoning_content.strip() + '</think>'}}
62
+ {%- else -%}
63
+ {{ '\n<think></think>' }}
64
+ {%- endif -%}
65
+ {%- if content.strip() -%}
66
+ {{ '\n' + content.strip() }}
67
+ {%- endif -%}
68
+ {% if m.tool_calls %}
69
+ {% for tc in m.tool_calls %}
70
+ {%- if tc.function %}
71
+ {%- set tc = tc.function %}
72
+ {%- endif %}
73
+ {{ '\n<tool_call>' + tc.name }}
74
+ {% set _args = tc.arguments %}
75
+ {% for k, v in _args.items() %}
76
+ <arg_key>{{ k }}</arg_key>
77
+ <arg_value>{{ v | tojson(ensure_ascii=False) if v is not string else v }}</arg_value>
78
+ {% endfor %}
79
+ </tool_call>{% endfor %}
80
+ {% endif %}
81
+ {%- elif m.role == 'tool' -%}
82
+ {%- if m.content is string -%}
83
+ {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
84
+ {{- '<|observation|>' }}
85
+ {%- endif %}
86
+ {{- '\n<tool_response>\n' }}
87
+ {{- m.content }}
88
+ {{- '\n</tool_response>' }}
89
+ {%- else -%}
90
+ <|observation|>{% for tr in m.content %}
91
+
92
+ <tool_response>
93
+ {{ tr.output if tr.output is defined else tr }}
94
+ </tool_response>{% endfor -%}
95
+ {% endif -%}
96
+ {%- elif m.role == 'system' -%}
97
+ <|system|>
98
+ {{ visible_text(m.content) }}
99
+ {%- endif -%}
100
+ {%- endfor -%}
101
+ {%- if add_generation_prompt -%}
102
+ <|assistant|>{{- '\n<think></think>' if (enable_thinking is defined and not enable_thinking) else '' -}}
103
+ {%- endif -%}
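
The template above emits `[gMASK]<sop>`, a `<|system|># Tools` block when tools are passed, and appends `/nothink` plus an empty `<think></think>` block when `enable_thinking=False`. Below is a sketch of rendering it through `apply_chat_template`; the `get_weather` tool is a made-up example used only to exercise the `<tools>` block.

```python
# Sketch of rendering the chat template above (hypothetical tool for illustration).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")  # this folder

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # made-up tool name
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
messages = [{"role": "user", "content": "What's the weather in Paris?"}]

prompt = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    enable_thinking=False,  # template appends "/nothink" and "<think></think>"
    tokenize=False,
)
print(prompt)  # begins with "[gMASK]<sop>" followed by the <|system|># Tools block
```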
config.json ADDED
@@ -0,0 +1,48 @@
1
+ {
2
+ "architectures": [
3
+ "Glm4MoeForCausalLM"
4
+ ],
5
+ "auto_map": {
6
+ "AutoConfig": "configuration_glm4_moe.Glm4MoeConfig",
7
+ "AutoModelForCausalLM": "modeling_glm4_moe.Glm4MoeForCausalLM",
8
+ "AutoModel": "modeling_glm4_moe.Glm4MoeModel"
9
+ },
10
+ "attention_bias": true,
11
+ "attention_dropout": 0.0,
12
+ "pad_token_id": 151329,
13
+ "eos_token_id": [
14
+ 151329,
15
+ 151336,
16
+ 151338
17
+ ],
18
+ "head_dim": 64,
19
+ "hidden_act": "silu",
20
+ "hidden_size": 1024,
21
+ "partial_rotary_factor": 0.5,
22
+ "initializer_range": 0.02,
23
+ "intermediate_size": 2048,
24
+ "max_position_embeddings": 131072,
25
+ "model_type": "glm4_moe",
26
+ "moe_intermediate_size": 256,
27
+ "norm_topk_prob": true,
28
+ "num_attention_heads": 16,
29
+ "n_group": 1,
30
+ "topk_group": 1,
31
+ "n_routed_experts": 8,
32
+ "n_shared_experts": 1,
33
+ "routed_scaling_factor": 1.0,
34
+ "num_experts_per_tok": 4,
35
+ "first_k_dense_replace": 1,
36
+ "num_hidden_layers": 24,
37
+ "num_key_value_heads": 4,
38
+ "rms_norm_eps": 1e-05,
39
+ "rope_scaling": null,
40
+ "rope_theta": 1000000,
41
+ "num_nextn_predict_layers": 1,
42
+ "tie_word_embeddings": false,
43
+ "torch_dtype": "bfloat16",
44
+ "transformers_version": "4.54.0",
45
+ "use_cache": true,
46
+ "use_qk_norm": false,
47
+ "vocab_size": 151552
48
+ }
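
A short sketch of inspecting the configuration above with `AutoConfig`; the values in the comments are copied from the JSON. Note that this config.json appears to describe a much smaller model (hidden_size 1024, 24 layers, 8 routed experts) than the GLM-4.5-Air described in the README, so it likely serves as a reduced configuration for this upload.

```python
# Sketch: load the configuration above and inspect its MoE layout.
# Assumes this folder locally; trust_remote_code resolves the auto_map entry
# to the bundled configuration_glm4_moe.Glm4MoeConfig.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(".", trust_remote_code=True)

print(config.model_type)             # "glm4_moe"
print(config.num_hidden_layers)      # 24 layers in this checkpoint
print(config.first_k_dense_replace)  # the first 1 layer is dense, the rest use MoE
print(config.n_routed_experts,       # 8 routed experts ...
      config.num_experts_per_tok,    # ... 4 active per token ...
      config.n_shared_experts)       # ... plus 1 shared expert
```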
configuration_glm4_moe.py ADDED
@@ -0,0 +1,243 @@
1
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
2
+ # This file was automatically generated from src/transformers/models/glm4_moe/modular_glm4_moe.py.
3
+ # Do NOT edit this file manually as any edits will be overwritten by the generation of
4
+ # the file from the modular. If any change should be done, please apply the change to the
5
+ # modular_glm4_moe.py file directly. One of our CI enforces this.
6
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
7
+ # coding=utf-8
8
+ # Copyright 2025 The ZhipuAI Inc. team and HuggingFace Inc. team. All rights reserved.
9
+ #
10
+ # Licensed under the Apache License, Version 2.0 (the "License");
11
+ # you may not use this file except in compliance with the License.
12
+ # You may obtain a copy of the License at
13
+ #
14
+ # http://www.apache.org/licenses/LICENSE-2.0
15
+ #
16
+ # Unless required by applicable law or agreed to in writing, software
17
+ # distributed under the License is distributed on an "AS IS" BASIS,
18
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
19
+ # See the License for the specific language governing permissions and
20
+ # limitations under the License.
21
+
22
+ from transformers.configuration_utils import PretrainedConfig
23
+ from transformers.modeling_rope_utils import rope_config_validation
24
+
25
+
26
+ class Glm4MoeConfig(PretrainedConfig):
27
+ r"""
28
+ This is the configuration class to store the configuration of a [`Glm4MoeModel`]. It is used to instantiate a
29
+ Glm4Moe model according to the specified arguments, defining the model architecture. Instantiating a configuration
30
+ with the defaults will yield a similar configuration to that of [THUDM/GLM-4-100B-A10B](https://huggingface.co/THUDM/GLM-4-100B-A10B).
31
+
32
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
33
+ documentation from [`PretrainedConfig`] for more information.
34
+
35
+
36
+ Args:
37
+ vocab_size (`int`, *optional*, defaults to 151552):
38
+ Vocabulary size of the Glm4Moe model. Defines the number of different tokens that can be represented by the
39
+ `inputs_ids` passed when calling [`Glm4MoeModel`]
40
+ hidden_size (`int`, *optional*, defaults to 4096):
41
+ Dimension of the hidden representations.
42
+ intermediate_size (`int`, *optional*, defaults to 10944):
43
+ Dimension of the MLP representations.
44
+ num_hidden_layers (`int`, *optional*, defaults to 46):
45
+ Number of hidden layers in the Transformer encoder.
46
+ num_attention_heads (`int`, *optional*, defaults to 96):
47
+ Number of attention heads for each attention layer in the Transformer encoder.
48
+ partial_rotary_factor (`float`, *optional*, defaults to 0.5):
49
+ The factor of the partial rotary position.
50
+ num_key_value_heads (`int`, *optional*, defaults to 8):
51
+ This is the number of key_value heads that should be used to implement Grouped Query Attention. If
52
+ `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
53
+ `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
54
+ converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
55
+ by meanpooling all the original heads within that group. For more details, check out [this
56
+ paper](https://huggingface.co/papers/2305.13245). If it is not specified, will default to `32`.
57
+
58
+ hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
59
+ The non-linear activation function (function or string) in the decoder.
60
+ max_position_embeddings (`int`, *optional*, defaults to 131072):
61
+ The maximum sequence length that this model might ever be used with.
62
+ initializer_range (`float`, *optional*, defaults to 0.02):
63
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
64
+ rms_norm_eps (`float`, *optional*, defaults to 1e-05):
65
+ The epsilon used by the rms normalization layers.
66
+ use_cache (`bool`, *optional*, defaults to `True`):
67
+ Whether or not the model should return the last key/values attentions (not used by all models). Only
68
+ relevant if `config.is_decoder=True`.
69
+ tie_word_embeddings (`bool`, *optional*, defaults to `False`):
70
+ Whether the model's input and output word embeddings should be tied.
71
+ rope_theta (`float`, *optional*, defaults to 10000.0):
72
+ The base period of the RoPE embeddings.
73
+ rope_scaling (`Dict`, *optional*):
74
+ Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply new rope type
75
+ and you expect the model to work on longer `max_position_embeddings`, we recommend you to update this value
76
+ accordingly.
77
+ Expected contents:
78
+ `rope_type` (`str`):
79
+ The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
80
+ 'llama3'], with 'default' being the original RoPE implementation.
81
+ `factor` (`float`, *optional*):
82
+ Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
83
+ most scaling types, a `factor` of x will enable the model to handle sequences of length x *
84
+ original maximum pre-trained length.
85
+ `original_max_position_embeddings` (`int`, *optional*):
86
+ Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
87
+ pretraining.
88
+ `attention_factor` (`float`, *optional*):
89
+ Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
90
+ computation. If unspecified, it defaults to value recommended by the implementation, using the
91
+ `factor` field to infer the suggested value.
92
+ `beta_fast` (`float`, *optional*):
93
+ Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
94
+ ramp function. If unspecified, it defaults to 32.
95
+ `beta_slow` (`float`, *optional*):
96
+ Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
97
+ ramp function. If unspecified, it defaults to 1.
98
+ `short_factor` (`list[float]`, *optional*):
99
+ Only used with 'longrope'. The scaling factor to be applied to short contexts (<
100
+ `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
101
+ size divided by the number of attention heads divided by 2
102
+ `long_factor` (`list[float]`, *optional*):
103
+ Only used with 'longrope'. The scaling factor to be applied to long contexts (>
104
+ `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
105
+ size divided by the number of attention heads divided by 2
106
+ `low_freq_factor` (`float`, *optional*):
107
+ Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
108
+ `high_freq_factor` (`float`, *optional*):
109
+ Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
110
+ attention_bias (`bool`, defaults to `False`, *optional*, defaults to `False`):
111
+ Whether to use a bias in the query, key, value and output projection layers during self-attention.
112
+ attention_dropout (`float`, *optional*, defaults to 0.0):
113
+ The dropout ratio for the attention probabilities.
114
+ moe_intermediate_size (`int`, *optional*, defaults to 1408):
115
+ Intermediate size of the routed expert.
116
+ num_experts_per_tok (`int`, *optional*, defaults to 8):
117
+ number of experts per token.
118
+ n_shared_experts (`int`, *optional*, defaults to 1):
119
+ Number of shared experts.
120
+ n_routed_experts (`int`, *optional*, defaults to 128):
121
+ Number of routed experts.
122
+ routed_scaling_factor (`float`, *optional*, defaults to 1.0):
123
+ Scaling factor or routed experts.
124
+ n_group (`int`, *optional*, defaults to 1):
125
+ Number of groups for routed experts.
126
+ topk_group (`int`, *optional*, defaults to 1):
127
+ Number of selected groups for each token(for each token, ensuring the selected experts is only within `topk_group` groups).
128
+ first_k_dense_replace (`int`, *optional*, defaults to 1):
129
+ Number of dense layers in shallow layers(embed->dense->dense->...->dense->moe->moe...->lm_head).
130
+ \--k dense layers--/
131
+ norm_topk_prob (`bool`, *optional*, defaults to `True`):
132
+ Whether to normalize the topk probabilities.
133
+ use_qk_norm (`bool`, *optional*, defaults to `False`):
134
+ Whether to use query-key normalization in the attention
135
+ ```python
136
+ >>> from transformers import Glm4MoeModel, Glm4MoeConfig
137
+
138
+ >>> # Initializing a Glm4Moe style configuration
139
+ >>> configuration = Glm4MoeConfig()
140
+
141
+ >>> # Initializing a model from the GLM-4-MOE-100B-A10B style configuration
142
+ >>> model = Glm4MoeModel(configuration)
143
+
144
+ >>> # Accessing the model configuration
145
+ >>> configuration = model.config
146
+ ```"""
147
+
148
+ model_type = "glm4_moe"
149
+ keys_to_ignore_at_inference = ["past_key_values"]
150
+
151
+ # Default tensor parallel plan for base model `Glm4Moe`
152
+ base_model_tp_plan = {
153
+ "layers.*.self_attn.q_proj": "colwise",
154
+ "layers.*.self_attn.k_proj": "colwise",
155
+ "layers.*.self_attn.v_proj": "colwise",
156
+ "layers.*.self_attn.o_proj": "rowwise",
157
+ "layers.*.mlp.experts.*.gate_proj": "colwise",
158
+ "layers.*.mlp.experts.*.up_proj": "colwise",
159
+ "layers.*.mlp.experts.*.down_proj": "rowwise",
160
+ "layers.*.mlp.gate_proj": "colwise",
161
+ "layers.*.mlp.up_proj": "colwise",
162
+ "layers.*.mlp.down_proj": "rowwise",
163
+ }
164
+ base_model_pp_plan = {
165
+ "embed_tokens": (["input_ids"], ["inputs_embeds"]),
166
+ "layers": (["hidden_states", "attention_mask"], ["hidden_states"]),
167
+ "norm": (["hidden_states"], ["hidden_states"]),
168
+ }
169
+
170
+ def __init__(
171
+ self,
172
+ vocab_size=151552,
173
+ hidden_size=4096,
174
+ intermediate_size=10944,
175
+ num_hidden_layers=46,
176
+ num_attention_heads=96,
177
+ partial_rotary_factor=0.5,
178
+ num_key_value_heads=8,
179
+ hidden_act="silu",
180
+ max_position_embeddings=131072,
181
+ initializer_range=0.02,
182
+ rms_norm_eps=1e-5,
183
+ use_cache=True,
184
+ tie_word_embeddings=False,
185
+ rope_theta=10000.0,
186
+ rope_scaling=None,
187
+ attention_bias=False,
188
+ attention_dropout=0.0,
189
+ moe_intermediate_size=1408,
190
+ num_experts_per_tok=8,
191
+ n_shared_experts=1,
192
+ n_routed_experts=128,
193
+ routed_scaling_factor=1.0,
194
+ n_group=1,
195
+ topk_group=1,
196
+ first_k_dense_replace=1,
197
+ norm_topk_prob=True,
198
+ use_qk_norm=False,
199
+ **kwargs,
200
+ ):
201
+ self.vocab_size = vocab_size
202
+ self.max_position_embeddings = max_position_embeddings
203
+ self.hidden_size = hidden_size
204
+ self.intermediate_size = intermediate_size
205
+ self.num_hidden_layers = num_hidden_layers
206
+ self.num_attention_heads = num_attention_heads
207
+ self.partial_rotary_factor = partial_rotary_factor
208
+
209
+ self.num_key_value_heads = num_key_value_heads
210
+ self.hidden_act = hidden_act
211
+ self.initializer_range = initializer_range
212
+ self.rms_norm_eps = rms_norm_eps
213
+ self.use_cache = use_cache
214
+ self.rope_theta = rope_theta
215
+ self.rope_scaling = rope_scaling
216
+ self.attention_bias = attention_bias
217
+ self.attention_dropout = attention_dropout
218
+ # Validate the correctness of rotary position embeddings parameters
219
+ # BC: if there is a 'type' field, move it to 'rope_type'.
220
+ if self.rope_scaling is not None and "type" in self.rope_scaling:
221
+ self.rope_scaling["rope_type"] = self.rope_scaling["type"]
222
+ rope_config_validation(self)
223
+
224
+ # MoE arguments
225
+ self.moe_intermediate_size = moe_intermediate_size
226
+ self.num_experts_per_tok = num_experts_per_tok
227
+ self.n_group = n_group
228
+ self.topk_group = topk_group
229
+ self.n_shared_experts = n_shared_experts
230
+ self.n_routed_experts = n_routed_experts
231
+ self.routed_scaling_factor = routed_scaling_factor
232
+ self.first_k_dense_replace = first_k_dense_replace
233
+ self.norm_topk_prob = norm_topk_prob
234
+ self.use_qk_norm = use_qk_norm
235
+
236
+ super().__init__(
237
+ tie_word_embeddings=tie_word_embeddings,
238
+ **kwargs,
239
+ )
240
+
241
+
242
+ __all__ = ["Glm4MoeConfig"]
243
+
generation_config.json ADDED
@@ -0,0 +1,10 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "eos_token_id": [
4
+ 151329,
5
+ 151336,
6
+ 151338
7
+ ],
8
+ "pad_token_id": 151329,
9
+ "transformers_version": "4.54.0"
10
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7385382d8a8eaae9721cef1adff52e17aa041442f8cb2a7d4404fddc59458005
3
+ size 1085351336
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
modeling_glm4_moe.py ADDED
@@ -0,0 +1,623 @@
1
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
2
+ # This file was automatically generated from src/transformers/models/glm4_moe/modular_glm4_moe.py.
3
+ # Do NOT edit this file manually as any edits will be overwritten by the generation of
4
+ # the file from the modular. If any change should be done, please apply the change to the
5
+ # modular_glm4_moe.py file directly. One of our CI enforces this.
6
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
7
+ # coding=utf-8
8
+ # Copyright 2025 The ZhipuAI Inc. team and HuggingFace Inc. team. All rights reserved.
9
+ #
10
+ # Licensed under the Apache License, Version 2.0 (the "License");
11
+ # you may not use this file except in compliance with the License.
12
+ # You may obtain a copy of the License at
13
+ #
14
+ # http://www.apache.org/licenses/LICENSE-2.0
15
+ #
16
+ # Unless required by applicable law or agreed to in writing, software
17
+ # distributed under the License is distributed on an "AS IS" BASIS,
18
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
19
+ # See the License for the specific language governing permissions and
20
+ # limitations under the License.
21
+
22
+ from typing import Callable, Optional, Union
23
+
24
+ import torch
25
+ import torch.nn.functional as F
26
+ from torch import nn
27
+
28
+ from transformers.activations import ACT2FN
29
+ from transformers.cache_utils import Cache, DynamicCache
30
+ from transformers.generation import GenerationMixin
31
+ from transformers.integrations import use_kernel_forward_from_hub
32
+ from transformers.masking_utils import create_causal_mask
33
+ from transformers.modeling_flash_attention_utils import FlashAttentionKwargs
34
+ from transformers.modeling_layers import GradientCheckpointingLayer
35
+ from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast
36
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS, dynamic_rope_update
37
+ from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel
38
+ from transformers.processing_utils import Unpack
39
+ from transformers.utils import TransformersKwargs, auto_docstring, can_return_tuple
40
+ from transformers.utils.deprecation import deprecate_kwarg
41
+ from transformers.utils.generic import check_model_inputs
42
+ from .configuration_glm4_moe import Glm4MoeConfig
43
+
44
+ from torchtitan.models.moe import MoE, MoEArgs
45
+
46
+
47
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
48
+ """
49
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
50
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
51
+ """
52
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
53
+ if n_rep == 1:
54
+ return hidden_states
55
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
56
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
57
+
58
+
59
+ def eager_attention_forward(
60
+ module: nn.Module,
61
+ query: torch.Tensor,
62
+ key: torch.Tensor,
63
+ value: torch.Tensor,
64
+ attention_mask: Optional[torch.Tensor],
65
+ scaling: float,
66
+ dropout: float = 0.0,
67
+ **kwargs: Unpack[TransformersKwargs],
68
+ ):
69
+ key_states = repeat_kv(key, module.num_key_value_groups)
70
+ value_states = repeat_kv(value, module.num_key_value_groups)
71
+
72
+ attn_weights = torch.matmul(query, key_states.transpose(2, 3)) * scaling
73
+ if attention_mask is not None:
74
+ causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
75
+ attn_weights = attn_weights + causal_mask
76
+
77
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
78
+ attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
79
+ attn_output = torch.matmul(attn_weights, value_states)
80
+ attn_output = attn_output.transpose(1, 2).contiguous()
81
+
82
+ return attn_output, attn_weights
83
+
84
+
85
+ def rotate_half(x):
86
+ """Rotates half the hidden dims of the input."""
87
+ x1 = x[..., : x.shape[-1] // 2]
88
+ x2 = x[..., x.shape[-1] // 2 :]
89
+ return torch.cat((-x2, x1), dim=-1)
90
+
91
+
92
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
93
+ """Applies Rotary Position Embedding to the query and key tensors.
94
+
95
+ Args:
96
+ q (`torch.Tensor`): The query tensor.
97
+ k (`torch.Tensor`): The key tensor.
98
+ cos (`torch.Tensor`): The cosine part of the rotary embedding.
99
+ sin (`torch.Tensor`): The sine part of the rotary embedding.
100
+ position_ids (`torch.Tensor`, *optional*):
101
+ Deprecated and unused.
102
+ unsqueeze_dim (`int`, *optional*, defaults to 1):
103
+ The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
104
+ sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
105
+ that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
106
+ k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
107
+ cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
108
+ the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
109
+ Returns:
110
+ `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
111
+ """
112
+ cos = cos.unsqueeze(unsqueeze_dim)
113
+ sin = sin.unsqueeze(unsqueeze_dim)
114
+
115
+ # Keep half or full tensor for later concatenation
116
+ rotary_dim = cos.shape[-1]
117
+ q_rot, q_pass = q[..., :rotary_dim], q[..., rotary_dim:]
118
+ k_rot, k_pass = k[..., :rotary_dim], k[..., rotary_dim:]
119
+
120
+ # Apply rotary embeddings on the first half or full tensor
121
+ q_embed = (q_rot * cos) + (rotate_half(q_rot) * sin)
122
+ k_embed = (k_rot * cos) + (rotate_half(k_rot) * sin)
123
+
124
+ # Concatenate back to full shape
125
+ q_embed = torch.cat([q_embed, q_pass], dim=-1)
126
+ k_embed = torch.cat([k_embed, k_pass], dim=-1)
127
+ return q_embed, k_embed
128
+
129
+
130
+ class Glm4MoeAttention(nn.Module):
131
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
132
+
133
+ def __init__(self, config: Glm4MoeConfig, layer_idx: Optional[int] = None):
134
+ super().__init__()
135
+ self.config = config
136
+ self.layer_idx = layer_idx
137
+ self.head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
138
+ self.num_key_value_groups = config.num_attention_heads // config.num_key_value_heads
139
+ self.scaling = self.head_dim**-0.5
140
+ self.rope_scaling = config.rope_scaling
141
+ self.attention_dropout = config.attention_dropout
142
+ self.is_causal = True
143
+
144
+ self.q_proj = nn.Linear(
145
+ config.hidden_size, config.num_attention_heads * self.head_dim, bias=config.attention_bias
146
+ )
147
+ self.k_proj = nn.Linear(
148
+ config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.attention_bias
149
+ )
150
+ self.v_proj = nn.Linear(
151
+ config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.attention_bias
152
+ )
153
+ self.o_proj = nn.Linear(config.num_attention_heads * self.head_dim, config.hidden_size, bias=False)
154
+ self.use_qk_norm = config.use_qk_norm
155
+ if self.use_qk_norm:
156
+ self.q_norm = Glm4MoeRMSNorm(self.head_dim, eps=config.rms_norm_eps)
157
+ self.k_norm = Glm4MoeRMSNorm(self.head_dim, eps=config.rms_norm_eps)
158
+
159
+ @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58")
160
+ def forward(
161
+ self,
162
+ hidden_states: torch.Tensor,
163
+ position_embeddings: tuple[torch.Tensor, torch.Tensor],
164
+ attention_mask: Optional[torch.Tensor],
165
+ past_key_values: Optional[Cache] = None,
166
+ cache_position: Optional[torch.LongTensor] = None,
167
+ **kwargs: Unpack[FlashAttentionKwargs],
168
+ ) -> tuple[torch.Tensor, Optional[torch.Tensor]]:
169
+ input_shape = hidden_states.shape[:-1]
170
+ hidden_shape = (*input_shape, -1, self.head_dim)
171
+
172
+ query_states = self.q_proj(hidden_states).view(hidden_shape)
173
+ key_states = self.k_proj(hidden_states).view(hidden_shape)
174
+ value_states = self.v_proj(hidden_states).view(hidden_shape)
175
+
176
+ if self.use_qk_norm: # main diff from Llama
177
+ query_states = self.q_norm(query_states)
178
+ key_states = self.k_norm(key_states)
179
+
180
+ query_states = query_states.transpose(1, 2)
181
+ key_states = key_states.transpose(1, 2)
182
+ value_states = value_states.transpose(1, 2)
183
+
184
+ cos, sin = position_embeddings
185
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
186
+
187
+ if past_key_values is not None:
188
+ # sin and cos are specific to RoPE models; position_ids needed for the static cache
189
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
190
+ key_states, value_states = past_key_values.update(key_states, value_states, self.layer_idx, cache_kwargs)
191
+
192
+ key_states = key_states.repeat_interleave(self.num_key_value_groups, dim=1)
193
+ value_states = value_states.repeat_interleave(self.num_key_value_groups, dim=1)
194
+ out = F.scaled_dot_product_attention(query_states, key_states, value_states, is_causal=True)
195
+ out = out.transpose(1, 2).contiguous() #.view(out.shape[0], out.shape[1], -1)
196
+ attn_output = out.view(out.shape[0], out.shape[1], -1)
197
+ attn_weights = None
198
+
199
+ # attn_output = attn_output.reshape(*input_shape, -1).contiguous()
200
+ attn_output = self.o_proj(attn_output)
201
+ return attn_output, attn_weights
202
+
203
+
204
+ class Glm4MoeMLP(nn.Module):
205
+ def __init__(self, config, hidden_size=None, intermediate_size=None):
206
+ super().__init__()
207
+ self.config = config
208
+ self.hidden_size = config.hidden_size if hidden_size is None else hidden_size
209
+ self.intermediate_size = config.intermediate_size if intermediate_size is None else intermediate_size
210
+
211
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
212
+ self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
213
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
214
+ self.act_fn = ACT2FN[config.hidden_act]
215
+
216
+ def forward(self, x):
217
+ down_proj = self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
218
+ return down_proj
219
+
220
+
221
+ class Glm4MoeTopkRouter(nn.Module):
222
+ def __init__(self, config: Glm4MoeConfig):
223
+ super().__init__()
224
+ self.config = config
225
+ self.top_k = config.num_experts_per_tok
226
+ self.n_routed_experts = config.n_routed_experts
227
+ self.routed_scaling_factor = config.routed_scaling_factor
228
+ self.n_group = config.n_group
229
+ self.topk_group = config.topk_group
230
+ self.norm_topk_prob = config.norm_topk_prob
231
+
232
+ self.weight = nn.Parameter(torch.empty((self.n_routed_experts, config.hidden_size)))
233
+ self.register_buffer("e_score_correction_bias", torch.zeros((self.n_routed_experts), dtype=torch.float32))
234
+
235
+ @torch.no_grad()
236
+ def get_topk_indices(self, scores):
237
+ scores_for_choice = scores.view(-1, self.n_routed_experts) + self.e_score_correction_bias.unsqueeze(0)
238
+ group_scores = (
239
+ scores_for_choice.view(-1, self.n_group, self.n_routed_experts // self.n_group)
240
+ .topk(2, dim=-1)[0]
241
+ .sum(dim=-1)
242
+ )
243
+ group_idx = torch.topk(group_scores, k=self.topk_group, dim=-1, sorted=False)[1]
244
+ group_mask = torch.zeros_like(group_scores)
245
+ group_mask.scatter_(1, group_idx, 1)
246
+ score_mask = (
247
+ group_mask.unsqueeze(-1)
248
+ .expand(-1, self.n_group, self.n_routed_experts // self.n_group)
249
+ .reshape(-1, self.n_routed_experts)
250
+ )
251
+ scores_for_choice = scores_for_choice.masked_fill(~score_mask.bool(), 0.0)
252
+ topk_indices = torch.topk(scores_for_choice, k=self.top_k, dim=-1, sorted=False)[1]
253
+ return topk_indices
254
+
255
+ def forward(self, hidden_states):
256
+ hidden_states = hidden_states.view(-1, self.config.hidden_size)
257
+ router_logits = F.linear(hidden_states.type(torch.float32), self.weight.type(torch.float32))
258
+ scores = router_logits.sigmoid()
259
+ topk_indices = self.get_topk_indices(scores)
260
+ topk_weights = scores.gather(1, topk_indices)
261
+ if self.norm_topk_prob:
262
+ denominator = topk_weights.sum(dim=-1, keepdim=True) + 1e-20
263
+ topk_weights /= denominator
264
+ topk_weights = topk_weights * self.routed_scaling_factor
265
+ return topk_indices, topk_weights
266
+
267
+
268
+ @use_kernel_forward_from_hub("RMSNorm")
269
+ class Glm4MoeRMSNorm(nn.Module):
270
+ def __init__(self, hidden_size, eps=1e-6):
271
+ """
272
+ Glm4MoeRMSNorm is equivalent to T5LayerNorm
273
+ """
274
+ super().__init__()
275
+ self.weight = nn.Parameter(torch.ones(hidden_size))
276
+ self.variance_epsilon = eps
277
+
278
+ def forward(self, hidden_states):
279
+ input_dtype = hidden_states.dtype
280
+ hidden_states = hidden_states.to(torch.float32)
281
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
282
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
283
+ return self.weight * hidden_states.to(input_dtype)
284
+
285
+ def extra_repr(self):
286
+ return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}"
287
+
288
+
289
+ class Glm4MoeMoE(nn.Module):
290
+ """
291
+ A mixed expert module containing shared experts.
292
+ """
293
+
294
+ def __init__(self, config):
295
+ super().__init__()
296
+ self.config = config
297
+ self.experts = nn.ModuleList(
298
+ [
299
+ Glm4MoeMLP(config, intermediate_size=config.moe_intermediate_size)
300
+ for _ in range(config.n_routed_experts)
301
+ ]
302
+ )
303
+ self.gate = Glm4MoeTopkRouter(config)
304
+ self.shared_experts = Glm4MoeMLP(
305
+ config=config, intermediate_size=config.moe_intermediate_size * config.n_shared_experts
306
+ )
307
+
308
+ def moe(self, hidden_states: torch.Tensor, topk_indices: torch.Tensor, topk_weights: torch.Tensor):
309
+ r"""
310
+ CALL FOR CONTRIBUTION! I don't have time to optimise this right now, but expert weights need to be fused
311
+ to not have to do a loop here (deepseek has 256 experts soooo yeah).
312
+ """
313
+ final_hidden_states = torch.zeros_like(hidden_states, dtype=topk_weights.dtype)
314
+ expert_mask = torch.nn.functional.one_hot(topk_indices, num_classes=len(self.experts))
315
+ expert_mask = expert_mask.permute(2, 0, 1)
316
+
317
+ for expert_idx in range(len(self.experts)):
318
+ expert = self.experts[expert_idx]
319
+ mask = expert_mask[expert_idx]
320
+ token_indices, weight_indices = torch.where(mask)
321
+
322
+ if token_indices.numel() > 0:
323
+ expert_weights = topk_weights[token_indices, weight_indices]
324
+ expert_input = hidden_states[token_indices]
325
+ expert_output = expert(expert_input)
326
+ weighted_output = expert_output * expert_weights.unsqueeze(-1)
327
+ final_hidden_states.index_add_(0, token_indices, weighted_output)
328
+
329
+ # in the original deepseek, the outputs of the experts are gathered once we leave this module
330
+ # thus the moe module is itself an IsolatedParallel module
331
+ # and all experts are "local", meaning we shard but we don't gather
332
+ return final_hidden_states.type(hidden_states.dtype)
333
+
334
+ def forward(self, hidden_states):
335
+ residuals = hidden_states
336
+ orig_shape = hidden_states.shape
337
+ topk_indices, topk_weights = self.gate(hidden_states)
338
+ hidden_states = hidden_states.view(-1, hidden_states.shape[-1])
339
+ hidden_states = self.moe(hidden_states, topk_indices, topk_weights).view(*orig_shape)
340
+ hidden_states = hidden_states + self.shared_experts(residuals)
341
+ return hidden_states
342
+
343
+
344
+ class Glm4MoeDecoderLayer(GradientCheckpointingLayer):
345
+ def __init__(self, config: Glm4MoeConfig, layer_idx: int):
346
+ super().__init__()
347
+ self.hidden_size = config.hidden_size
348
+
349
+ self.self_attn = Glm4MoeAttention(config=config, layer_idx=layer_idx)
350
+
351
+ moe_args = MoEArgs(
352
+ num_experts=config.n_routed_experts,
353
+ num_shared_experts=config.n_shared_experts,
354
+ score_func="sigmoid",
355
+ route_norm=config.norm_topk_prob,
356
+ route_scale=config.routed_scaling_factor,
357
+ score_before_experts=False,
358
+ top_k=config.num_experts_per_tok,
359
+ use_grouped_mm=torch.cuda.get_device_capability(0)[0] >= 9,
360
+ load_balance_coeff=1e-3,
361
+ )
362
+
363
+ if layer_idx >= config.first_k_dense_replace:
364
+ self.mlp = MoE(moe_args, dim=config.hidden_size, hidden_dim=config.moe_intermediate_size)
365
+ else:
366
+ self.mlp = Glm4MoeMLP(config)
367
+
368
+ self.input_layernorm = Glm4MoeRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
369
+ self.post_attention_layernorm = Glm4MoeRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
370
+
371
+ @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58")
372
+ def forward(
373
+ self,
374
+ hidden_states: torch.Tensor,
375
+ attention_mask: Optional[torch.Tensor] = None,
376
+ position_ids: Optional[torch.LongTensor] = None,
377
+ past_key_values: Optional[Cache] = None,
378
+ use_cache: Optional[bool] = False,
379
+ cache_position: Optional[torch.LongTensor] = None,
380
+ position_embeddings: Optional[tuple[torch.Tensor, torch.Tensor]] = None, # necessary, but kept here for BC
381
+ **kwargs: Unpack[TransformersKwargs],
382
+ ) -> torch.Tensor:
383
+ residual = hidden_states
384
+ hidden_states = self.input_layernorm(hidden_states)
385
+ # Self Attention
386
+ hidden_states, _ = self.self_attn(
387
+ hidden_states=hidden_states,
388
+ attention_mask=attention_mask,
389
+ position_ids=position_ids,
390
+ past_key_values=past_key_values,
391
+ use_cache=use_cache,
392
+ cache_position=cache_position,
393
+ position_embeddings=position_embeddings,
394
+ **kwargs,
395
+ )
396
+ hidden_states = residual + hidden_states
397
+
398
+ # Fully Connected
399
+ residual = hidden_states
400
+ hidden_states = self.post_attention_layernorm(hidden_states)
401
+ hidden_states = self.mlp(hidden_states)
402
+ hidden_states = residual + hidden_states
403
+ return hidden_states
404
+
405
+
406
+ @auto_docstring
407
+ class Glm4MoePreTrainedModel(PreTrainedModel):
408
+ config: Glm4MoeConfig
409
+ base_model_prefix = "model"
410
+ supports_gradient_checkpointing = True
411
+ _no_split_modules = ["Glm4MoeDecoderLayer"]
412
+ _skip_keys_device_placement = ["past_key_values"]
413
+ _supports_flash_attn = True
414
+ _supports_sdpa = True
415
+ _supports_flex_attn = True
416
+ _can_compile_fullgraph = False
417
+ _supports_attention_backend = True
418
+ _can_record_outputs = {
419
+ "hidden_states": Glm4MoeDecoderLayer,
420
+ "attentions": Glm4MoeAttention,
421
+ }
422
+
423
+ def _init_weights(self, module):
424
+ super()._init_weights(module)
425
+ if isinstance(module, Glm4MoeTopkRouter):
426
+ module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
427
+
428
+
429
+ class Glm4MoeRotaryEmbedding(nn.Module):
430
+ inv_freq: torch.Tensor # fix linting for `register_buffer`
431
+
432
+ def __init__(self, config: Glm4MoeConfig, device=None):
433
+ super().__init__()
434
+ # BC: "rope_type" was originally "type"
435
+ if hasattr(config, "rope_scaling") and isinstance(config.rope_scaling, dict):
436
+ self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling.get("type"))
437
+ else:
438
+ self.rope_type = "default"
439
+ self.max_seq_len_cached = config.max_position_embeddings
440
+ self.original_max_seq_len = config.max_position_embeddings
441
+
442
+ self.config = config
443
+ self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
444
+
445
+ inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device)
446
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
447
+ self.original_inv_freq = self.inv_freq
448
+
449
+ @torch.no_grad()
450
+ @dynamic_rope_update # power user: used with advanced RoPE types (e.g. dynamic rope)
451
+ def forward(self, x, position_ids):
452
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)
453
+ position_ids_expanded = position_ids[:, None, :].float()
454
+
455
+ device_type = x.device.type if isinstance(x.device.type, str) and x.device.type != "mps" else "cpu"
456
+ with torch.autocast(device_type=device_type, enabled=False): # Force float32
457
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
458
+ emb = torch.cat((freqs, freqs), dim=-1)
459
+ cos = emb.cos() * self.attention_scaling
460
+ sin = emb.sin() * self.attention_scaling
461
+
462
+ return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
463
+
464
+
465
+ @auto_docstring
466
+ class Glm4MoeModel(Glm4MoePreTrainedModel):
467
+ _keys_to_ignore_on_load_unexpected = [r"model\.layers\.92.*", r"model\.layers\.46.*"]
468
+
469
+ def __init__(self, config: Glm4MoeConfig):
470
+ super().__init__(config)
471
+ self.padding_idx = config.pad_token_id
472
+ self.vocab_size = config.vocab_size
473
+
474
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
475
+ self.layers = nn.ModuleList(
476
+ [Glm4MoeDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
477
+ )
478
+ self.norm = Glm4MoeRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
479
+ self.rotary_emb = Glm4MoeRotaryEmbedding(config=config)
480
+ self.gradient_checkpointing = False
481
+
482
+ # Initialize weights and apply final processing
483
+ self.post_init()
484
+
485
+ @check_model_inputs
486
+ @auto_docstring
487
+ def forward(
488
+ self,
489
+ input_ids: Optional[torch.LongTensor] = None,
490
+ attention_mask: Optional[torch.Tensor] = None,
491
+ position_ids: Optional[torch.LongTensor] = None,
492
+ past_key_values: Optional[Cache] = None,
493
+ inputs_embeds: Optional[torch.FloatTensor] = None,
494
+ cache_position: Optional[torch.LongTensor] = None,
495
+ use_cache: Optional[bool] = None,
496
+ **kwargs: Unpack[TransformersKwargs],
497
+ ) -> BaseModelOutputWithPast:
498
+ if (input_ids is None) ^ (inputs_embeds is not None):
499
+ raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
500
+
501
+ if inputs_embeds is None:
502
+ inputs_embeds: torch.Tensor = self.embed_tokens(input_ids)
503
+
504
+ if use_cache and past_key_values is None:
505
+ past_key_values = DynamicCache(config=self.config)
506
+
507
+ if cache_position is None:
508
+ past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
509
+ cache_position: torch.Tensor = torch.arange(
510
+ past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
511
+ )
512
+
513
+ if position_ids is None:
514
+ position_ids = cache_position.unsqueeze(0)
515
+
516
+ causal_mask = create_causal_mask(
517
+ config=self.config,
518
+ input_embeds=inputs_embeds,
519
+ attention_mask=attention_mask,
520
+ cache_position=cache_position,
521
+ past_key_values=past_key_values,
522
+ position_ids=position_ids,
523
+ )
524
+
525
+ hidden_states = inputs_embeds
526
+ position_embeddings = self.rotary_emb(hidden_states, position_ids)
527
+
528
+ for decoder_layer in self.layers[: self.config.num_hidden_layers]:
529
+ hidden_states = decoder_layer(
530
+ hidden_states,
531
+ attention_mask=causal_mask,
532
+ position_ids=position_ids,
533
+ past_key_values=past_key_values,
534
+ cache_position=cache_position,
535
+ position_embeddings=position_embeddings,
536
+ **kwargs,
537
+ )
538
+
539
+ hidden_states = self.norm(hidden_states)
540
+ return BaseModelOutputWithPast(
541
+ last_hidden_state=hidden_states,
542
+ past_key_values=past_key_values,
543
+ )
544
+
545
+
546
+ @auto_docstring
547
+ class Glm4MoeForCausalLM(Glm4MoePreTrainedModel, GenerationMixin):
548
+ _tied_weights_keys = ["lm_head.weight"]
549
+ _tp_plan = {"lm_head": "colwise_rep"}
550
+ _pp_plan = {"lm_head": (["hidden_states"], ["logits"])}
551
+
552
+ def __init__(self, config):
553
+ super().__init__(config)
554
+ self.model = Glm4MoeModel(config)
555
+ self.vocab_size = config.vocab_size
556
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
557
+
558
+ # Initialize weights and apply final processing
559
+ self.post_init()
560
+
561
+ @can_return_tuple
562
+ @auto_docstring
563
+ def forward(
564
+ self,
565
+ input_ids: Optional[torch.LongTensor] = None,
566
+ attention_mask: Optional[torch.Tensor] = None,
567
+ position_ids: Optional[torch.LongTensor] = None,
568
+ past_key_values: Optional[Cache] = None,
569
+ inputs_embeds: Optional[torch.FloatTensor] = None,
570
+ labels: Optional[torch.LongTensor] = None,
571
+ use_cache: Optional[bool] = None,
572
+ cache_position: Optional[torch.LongTensor] = None,
573
+ logits_to_keep: Union[int, torch.Tensor] = 0,
574
+ **kwargs: Unpack[TransformersKwargs],
575
+ ) -> CausalLMOutputWithPast:
576
+ r"""
577
+ Example:
578
+
579
+ ```python
580
+ >>> from transformers import AutoTokenizer, Glm4MoeForCausalLM
581
+
582
+ >>> model = Glm4MoeForCausalLM.from_pretrained("meta-glm4_moe/Glm4Moe-2-7b-hf")
583
+ >>> tokenizer = AutoTokenizer.from_pretrained("meta-glm4_moe/Glm4Moe-2-7b-hf")
584
+
585
+ >>> prompt = "Hey, are you conscious? Can you talk to me?"
586
+ >>> inputs = tokenizer(prompt, return_tensors="pt")
587
+
588
+ >>> # Generate
589
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
590
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
591
+ "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
592
+ ```"""
593
+ outputs: BaseModelOutputWithPast = self.model(
594
+ input_ids=input_ids,
595
+ attention_mask=attention_mask,
596
+ position_ids=position_ids,
597
+ past_key_values=past_key_values,
598
+ inputs_embeds=inputs_embeds,
599
+ use_cache=use_cache,
600
+ cache_position=cache_position,
601
+ **kwargs,
602
+ )
603
+
604
+ hidden_states = outputs.last_hidden_state
605
+ # Only compute necessary logits, and do not upcast them to float if we are not computing the loss
606
+ slice_indices = slice(-logits_to_keep, None) if isinstance(logits_to_keep, int) else logits_to_keep
607
+ logits = self.lm_head(hidden_states[:, slice_indices, :])
608
+
609
+ loss = None
610
+ if labels is not None:
611
+ loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size, **kwargs)
612
+
613
+ return CausalLMOutputWithPast(
614
+ loss=loss,
615
+ logits=logits,
616
+ past_key_values=outputs.past_key_values,
617
+ hidden_states=outputs.hidden_states,
618
+ attentions=outputs.attentions,
619
+ )
620
+
621
+
622
+ __all__ = ["Glm4MoePreTrainedModel", "Glm4MoeModel", "Glm4MoeForCausalLM"]
623
+
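
The distinctive part of `Glm4MoeTopkRouter` above is its sigmoid scoring with a selection-only bias (`e_score_correction_bias`), top-k selection, optional weight normalization, and `routed_scaling_factor`. Note that `Glm4MoeDecoderLayer` in this file actually swaps in `torchtitan`'s `MoE` module for the MoE layers; the sketch below only re-implements the bundled router's math for illustration, for the `n_group=1` case used in this repo's config.json (where group masking is a no-op), rather than importing the module.

```python
# Standalone sketch of the sigmoid top-k routing in Glm4MoeTopkRouter (n_group=1 case).
import torch

hidden_size, n_routed_experts, top_k = 8, 8, 4       # matches config.json above
routed_scaling_factor, norm_topk_prob = 1.0, True

tokens = torch.randn(3, hidden_size)                        # 3 flattened tokens
router_weight = torch.randn(n_routed_experts, hidden_size)  # router projection
bias = torch.zeros(n_routed_experts)                        # e_score_correction_bias

scores = torch.sigmoid(tokens.float() @ router_weight.T)    # per-expert affinities in (0, 1)
# The bias only influences which experts are selected ...
topk_indices = torch.topk(scores + bias, k=top_k, dim=-1).indices
# ... while the gating weights come from the unbiased scores.
topk_weights = scores.gather(1, topk_indices)
if norm_topk_prob:
    topk_weights = topk_weights / (topk_weights.sum(dim=-1, keepdim=True) + 1e-20)
topk_weights = topk_weights * routed_scaling_factor

print(topk_indices.shape)        # torch.Size([3, 4])
print(topk_weights.sum(dim=-1))  # ~1.0 per token when norm_topk_prob is True
```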
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9340665016419c825c4bdabbcc9acc43b7ca2c68ce142724afa829abb1be5efd
3
+ size 19970699
tokenizer_config.json ADDED
@@ -0,0 +1,325 @@
1
+ {
2
+ "added_tokens_decoder": {
3
+ "151329": {
4
+ "content": "<|endoftext|>",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false,
9
+ "special": true
10
+ },
11
+ "151330": {
12
+ "content": "[MASK]",
13
+ "lstrip": false,
14
+ "normalized": false,
15
+ "rstrip": false,
16
+ "single_word": false,
17
+ "special": true
18
+ },
19
+ "151331": {
20
+ "content": "[gMASK]",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false,
25
+ "special": true
26
+ },
27
+ "151332": {
28
+ "content": "[sMASK]",
29
+ "lstrip": false,
30
+ "normalized": false,
31
+ "rstrip": false,
32
+ "single_word": false,
33
+ "special": true
34
+ },
35
+ "151333": {
36
+ "content": "<sop>",
37
+ "lstrip": false,
38
+ "normalized": false,
39
+ "rstrip": false,
40
+ "single_word": false,
41
+ "special": true
42
+ },
43
+ "151334": {
44
+ "content": "<eop>",
45
+ "lstrip": false,
46
+ "normalized": false,
47
+ "rstrip": false,
48
+ "single_word": false,
49
+ "special": true
50
+ },
51
+ "151335": {
52
+ "content": "<|system|>",
53
+ "lstrip": false,
54
+ "normalized": false,
55
+ "rstrip": false,
56
+ "single_word": false,
57
+ "special": true
58
+ },
59
+ "151336": {
60
+ "content": "<|user|>",
61
+ "lstrip": false,
62
+ "normalized": false,
63
+ "rstrip": false,
64
+ "single_word": false,
65
+ "special": true
66
+ },
67
+ "151337": {
68
+ "content": "<|assistant|>",
69
+ "lstrip": false,
70
+ "normalized": false,
71
+ "rstrip": false,
72
+ "single_word": false,
73
+ "special": true
74
+ },
75
+ "151338": {
76
+ "content": "<|observation|>",
77
+ "lstrip": false,
78
+ "normalized": false,
79
+ "rstrip": false,
80
+ "single_word": false,
81
+ "special": true
82
+ },
83
+ "151339": {
84
+ "content": "<|begin_of_image|>",
85
+ "lstrip": false,
86
+ "normalized": false,
87
+ "rstrip": false,
88
+ "single_word": false,
89
+ "special": true
90
+ },
91
+ "151340": {
92
+ "content": "<|end_of_image|>",
93
+ "lstrip": false,
94
+ "normalized": false,
95
+ "rstrip": false,
96
+ "single_word": false,
97
+ "special": true
98
+ },
99
+ "151341": {
100
+ "content": "<|begin_of_video|>",
101
+ "lstrip": false,
102
+ "normalized": false,
103
+ "rstrip": false,
104
+ "single_word": false,
105
+ "special": true
106
+ },
107
+ "151342": {
108
+ "content": "<|end_of_video|>",
109
+ "lstrip": false,
110
+ "normalized": false,
111
+ "rstrip": false,
112
+ "single_word": false,
113
+ "special": true
114
+ },
115
+ "151343": {
116
+ "content": "<|begin_of_audio|>",
117
+ "lstrip": false,
118
+ "normalized": false,
119
+ "rstrip": false,
120
+ "single_word": false,
121
+ "special": true
122
+ },
123
+ "151344": {
124
+ "content": "<|end_of_audio|>",
125
+ "lstrip": false,
126
+ "normalized": false,
127
+ "rstrip": false,
128
+ "single_word": false,
129
+ "special": true
130
+ },
131
+ "151345": {
132
+ "content": "<|begin_of_transcription|>",
133
+ "lstrip": false,
134
+ "normalized": false,
135
+ "rstrip": false,
136
+ "single_word": false,
137
+ "special": true
138
+ },
139
+ "151346": {
140
+ "content": "<|end_of_transcription|>",
141
+ "lstrip": false,
142
+ "normalized": false,
143
+ "rstrip": false,
144
+ "single_word": false,
145
+ "special": true
146
+ },
147
+ "151347": {
148
+ "content": "<|code_prefix|>",
149
+ "lstrip": false,
150
+ "normalized": false,
151
+ "rstrip": false,
152
+ "single_word": false,
153
+ "special": true
154
+ },
155
+ "151348": {
156
+ "content": "<|code_middle|>",
157
+ "lstrip": false,
158
+ "normalized": false,
159
+ "rstrip": false,
160
+ "single_word": false,
161
+ "special": true
162
+ },
163
+ "151349": {
164
+ "content": "<|code_suffix|>",
165
+ "lstrip": false,
166
+ "normalized": false,
167
+ "rstrip": false,
168
+ "single_word": false,
169
+ "special": true
170
+ },
171
+ "151350": {
172
+ "content": "<think>",
173
+ "lstrip": false,
174
+ "normalized": false,
175
+ "rstrip": false,
176
+ "single_word": false,
177
+ "special": false
178
+ },
179
+ "151351": {
180
+ "content": "</think>",
181
+ "lstrip": false,
182
+ "normalized": false,
183
+ "rstrip": false,
184
+ "single_word": false,
185
+ "special": false
186
+ },
187
+ "151352": {
188
+ "content": "<tool_call>",
189
+ "lstrip": false,
190
+ "normalized": false,
191
+ "rstrip": false,
192
+ "single_word": false,
193
+ "special": false
194
+ },
195
+ "151353": {
196
+ "content": "</tool_call>",
197
+ "lstrip": false,
198
+ "normalized": false,
199
+ "rstrip": false,
200
+ "single_word": false,
201
+ "special": false
202
+ },
203
+ "151354": {
204
+ "content": "<tool_response>",
205
+ "lstrip": false,
206
+ "normalized": false,
207
+ "rstrip": false,
208
+ "single_word": false,
209
+ "special": false
210
+ },
211
+ "151355": {
212
+ "content": "</tool_response>",
213
+ "lstrip": false,
214
+ "normalized": false,
215
+ "rstrip": false,
216
+ "single_word": false,
217
+ "special": false
218
+ },
219
+ "151356": {
220
+ "content": "<arg_key>",
221
+ "lstrip": false,
222
+ "normalized": false,
223
+ "rstrip": false,
224
+ "single_word": false,
225
+ "special": false
226
+ },
227
+ "151357": {
228
+ "content": "</arg_key>",
229
+ "lstrip": false,
230
+ "normalized": false,
231
+ "rstrip": false,
232
+ "single_word": false,
233
+ "special": false
234
+ },
235
+ "151358": {
236
+ "content": "<arg_value>",
237
+ "lstrip": false,
238
+ "normalized": false,
239
+ "rstrip": false,
240
+ "single_word": false,
241
+ "special": false
242
+ },
243
+ "151359": {
244
+ "content": "</arg_value>",
245
+ "lstrip": false,
246
+ "normalized": false,
247
+ "rstrip": false,
248
+ "single_word": false,
249
+ "special": false
250
+ },
251
+ "151360": {
252
+ "content": "/nothink",
253
+ "lstrip": false,
254
+ "normalized": false,
255
+ "rstrip": false,
256
+ "single_word": false,
257
+ "special": true
258
+ },
259
+ "151361": {
260
+ "content": "<|begin_of_box|>",
261
+ "lstrip": false,
262
+ "normalized": false,
263
+ "rstrip": false,
264
+ "single_word": false,
265
+ "special": false
266
+ },
267
+ "151362": {
268
+ "content": "<|end_of_box|>",
269
+ "lstrip": false,
270
+ "normalized": false,
271
+ "rstrip": false,
272
+ "single_word": false,
273
+ "special": false
274
+ },
275
+ "151363": {
276
+ "content": "<|image|>",
277
+ "lstrip": false,
278
+ "normalized": false,
279
+ "rstrip": false,
280
+ "single_word": false,
281
+ "special": false
282
+ },
283
+ "151364": {
284
+ "content": "<|video|>",
285
+ "lstrip": false,
286
+ "normalized": false,
287
+ "rstrip": false,
288
+ "single_word": false,
289
+ "special": false
290
+ }
291
+ },
292
+ "additional_special_tokens": [
293
+ "<|endoftext|>",
294
+ "[MASK]",
295
+ "[gMASK]",
296
+ "[sMASK]",
297
+ "<sop>",
298
+ "<eop>",
299
+ "<|system|>",
300
+ "<|user|>",
301
+ "<|assistant|>",
302
+ "<|observation|>",
303
+ "<|begin_of_image|>",
304
+ "<|end_of_image|>",
305
+ "<|begin_of_video|>",
306
+ "<|end_of_video|>",
307
+ "<|begin_of_audio|>",
308
+ "<|end_of_audio|>",
309
+ "<|begin_of_transcription|>",
310
+ "<|end_of_transcription|>",
311
+ "<|code_prefix|>",
312
+ "<|code_middle|>",
313
+ "<|code_suffix|>",
314
+ "/nothink"
315
+ ],
316
+ "clean_up_tokenization_spaces": false,
317
+ "do_lower_case": false,
318
+ "eos_token": "<eop>",
319
+ "extra_special_tokens": {},
320
+ "model_max_length": 128000,
321
+ "pad_token": "<|endoftext|>",
322
+ "padding_side": "left",
323
+ "remove_space": false,
324
+ "tokenizer_class": "PreTrainedTokenizer"
325
+ }
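
A quick sanity-check sketch for the tokenizer files above; the token ids in the comments are taken from `added_tokens_decoder`, and the example assumes `AutoTokenizer` builds the fast tokenizer from `tokenizer.json` in this folder.

```python
# Sketch: sanity-check a few of the special tokens declared above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")

print(tokenizer.eos_token)                               # "<eop>" per tokenizer_config.json
print(tokenizer.convert_tokens_to_ids("[gMASK]"))        # 151331
print(tokenizer.convert_tokens_to_ids("<|assistant|>"))  # 151337
print(tokenizer.convert_tokens_to_ids("/nothink"))       # 151360
```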