Introduction

Megrez2-3x7B-A3B is a device-native large language model. Megrez2 takes advantage of both the accuracy of the Mixture-of-Experts (MoE) architecture and the compact size of dense models. This release was trained on 8T tokens of data. In the future, we plan to improve the model's reasoning and agent capabilities.

Model Card

Architecture: Mixture-of-Experts (MoE)
Total Parameters: 3x7B
Activated Parameters: 3B
Expert Sharing Frequency: 3
Number of Layers (Dense layer included): 31
Number of Dense Layers: 1
Attention Hidden Dimension: 2048
MoE Hidden Dimension (per Expert): 1408
Number of Attention Heads: 16
Number of Experts: 64
Selected Experts per Token: 6
Number of Shared Experts: 4
Vocabulary Size: 128,880
Context Length: 32K
Base Frequency of RoPE: 5,000,000
Attention Mechanism: GQA
Activation Function: SwiGLU
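
To cross-check these hyperparameters against the released checkpoint, the config can be printed with a short sketch like the one below. The standard Hugging Face attribute names used here are assumptions; Megrez2-specific fields (for example, the expert counts) may be stored under different names in the custom config.

from transformers import AutoConfig

# Load the custom Megrez2 config from the Hub
config = AutoConfig.from_pretrained("Infinigence/Megrez2-3x7B-A3B", trust_remote_code=True)

# Print a few standard fields; "n/a" means the field uses a different (custom) name
for name in ("num_hidden_layers", "hidden_size", "num_attention_heads",
             "vocab_size", "rope_theta", "max_position_embeddings"):
    print(name, getattr(config, name, "n/a"))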

Performance

We evaluated Megrez2-3x7B-A3B using the open-source evaluation tool OpenCompass on several important benchmarks. Some of the evaluation results are shown in the table below.

| Benchmark | Metric | Megrez2-3x7B-A3B | Megrez2-3x7B-A3B-Preview | SmallThinker-21B-A3B-Instruct | Qwen3-30B-A3B | Qwen3-8B | Qwen3-4B-Instruct-2507 | Phi4-14B (nothink) | Gemma3-12B |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Activated Params (B) | | 3.0 | 3.0 | 3.0 | 3.3 | 8.2 | 4.0 | 14.7 | 12.2 |
| Stored Params (B) | | 7.5 | 7.5 | 21.5 | 30.5 | 8.2 | 4.0 | 14.7 | 12.2 |
| MMLU | EM | 85.4 | 87.5 | 84.4 | 85.1 | 81.8 | - | 84.6 | 78.5 |
| GPQA | EM | 58.8 | 28.8 | 55.0 | 44.4 | 38.9 | 62 | 55.5 | 34.9 |
| IFEval | Inst (loose) | 87.7 | 80.2 | 85.8 | 84.3 | 83.9 | 83.4 | 63.2 | 74.7 |
| MATH-500 | EM | 87.2 | 81.6 | 82.4 | 84.4 | 81.6 | - | 80.2 | 82.4 |
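
The results above were produced with OpenCompass. As a rough illustration (not an official config shipped with this repository), a model entry for OpenCompass could look like the sketch below; the HuggingFacewithChatTemplate class and the field names follow common OpenCompass model configs, and the output length, batch size, and GPU count are placeholder assumptions.

from opencompass.models import HuggingFacewithChatTemplate

models = [
    dict(
        type=HuggingFacewithChatTemplate,
        abbr='megrez2-3x7b-a3b-hf',                 # display name used in result tables
        path='Infinigence/Megrez2-3x7B-A3B',        # Hugging Face repo id
        model_kwargs=dict(trust_remote_code=True),  # custom Megrez2 architecture
        max_out_len=1024,                           # placeholder generation length
        batch_size=8,                               # placeholder batch size
        run_cfg=dict(num_gpus=1),                   # placeholder GPU count
    )
]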

How to Run

Transformers

The latest version of transformers is recommended; at minimum, transformers>=4.52.4 is required. The following code snippet illustrates how to use the model to generate content from given inputs.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

path = "Infinigence/Megrez2-3x7B-A3B"
device = "cuda"

# Load the tokenizer and model; trust_remote_code is required for the custom Megrez2 architecture
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map=device, trust_remote_code=True)

messages = [
    {"role": "user", "content": "世界上最高的山峰是哪座?"},  # "Which is the highest mountain in the world?"
]
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(device)

model_outputs = model.generate(
    model_inputs,
    do_sample=True,
    max_new_tokens=1024
)

# Strip the prompt tokens so that only the newly generated text is decoded
output_token_ids = [
    model_outputs[i][len(model_inputs[i]):] for i in range(len(model_inputs))
]

responses = tokenizer.batch_decode(output_token_ids, skip_special_tokens=True)[0]
print(responses)

# 世界上最高的山峰是珠穆朗玛峰(Mount Everest),位于喜马拉雅山脉的中尼边境。珠穆朗玛峰的海拔高度为8,848.86米(29,031.7英尺),这一数据是由中国和尼泊尔在2020年共同宣布的最新测量结果。珠穆朗玛峰不仅是登山爱好者的圣地,也是地理和科学研究的重要对象。
# (The world's highest mountain is Mount Everest, located on the China-Nepal border in the Himalayas. Its elevation is 8,848.86 meters (29,031.7 feet), the latest measurement jointly announced by China and Nepal in 2020. Everest is not only a mecca for mountaineers but also an important subject of geographic and scientific research.)

ModelScope

ModelScope provides a Python API similar to (though not entirely identical to) that of Transformers. For basic usage, simply modify the first import line of the code above as follows:

from modelscope import AutoModelForCausalLM, AutoTokenizer
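
Alternatively, the weights can be downloaded explicitly through ModelScope and then loaded from the local path with the Transformers snippet above. A minimal sketch, assuming the same repository id is used on ModelScope:

from modelscope import snapshot_download

# Download the checkpoint to the local ModelScope cache and return its path
local_dir = snapshot_download("Infinigence/Megrez2-3x7B-A3B")
print(local_dir)  # pass this path to AutoTokenizer / AutoModelForCausalLM instead of the repo id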

llama.cpp

llama.cpp enables LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware. Megrez2 is now supported; please refer to the support-megrez branch for details.

How to Deploy

Megrez2-3x7B-A3B supports vLLM and SGLang as inference backends. For more information, please visit the GitHub repository.
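
As a minimal offline-inference sketch with the vLLM Python API (assuming your installed vLLM build supports the custom Megrez2 architecture; see the repository for exact versions and server launch commands; the llm.chat helper requires a reasonably recent vLLM release):

from vllm import LLM, SamplingParams

# Load the model through vLLM; trust_remote_code is needed for the custom architecture
llm = LLM(model="Infinigence/Megrez2-3x7B-A3B", trust_remote_code=True)
sampling = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=1024)

# Chat-style generation using the model's own chat template
outputs = llm.chat(
    [{"role": "user", "content": "Which is the highest mountain in the world?"}],
    sampling,
)
print(outputs[0].outputs[0].text)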

Best Practice

To achieve optimal performance, we recommend the following settings:

  1. Sampling Parameters: We suggest using Temperature=0.7 and TopP=0.9 (applied in the sketch after this list).

  2. Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.

    • Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
    • Multiple-Choice Questions: Add the following instruction to the prompt so that responses are standardized into a JSON-style answer field: "Please show your choice in the answer field with only the choice letter, e.g., "answer": "C"."
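
A minimal sketch applying these settings, reusing the tokenizer, model, and device objects from the Transformers example above; the arithmetic question is only a placeholder prompt:

math_messages = [
    {"role": "user", "content": "Compute 12 * 7 + 5. Please reason step by step, and put your final answer within \\boxed{}."},
]
# Build the chat-formatted prompt and move it to the same device as the model
math_inputs = tokenizer.apply_chat_template(math_messages, return_tensors="pt", add_generation_prompt=True).to(device)

math_outputs = model.generate(
    math_inputs,
    do_sample=True,
    temperature=0.7,   # recommended sampling temperature
    top_p=0.9,         # recommended nucleus sampling threshold
    max_new_tokens=1024,
)
# Decode only the newly generated tokens
print(tokenizer.decode(math_outputs[0][math_inputs.shape[-1]:], skip_special_tokens=True))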

License Agreement

All our open-weight models are licensed under Apache 2.0.

Citation

If you find our work helpful, feel free to cite it.

@misc{li2025megrez2technicalreport,
      title={Megrez2 Technical Report}, 
      author={Boxun Li and Yadong Li and Zhiyuan Li and Congyi Liu and Weilin Liu and Guowei Niu and Zheyue Tan and Haiyang Xu and Zhuyu Yao and Tao Yuan and Dong Zhou and Yueqing Zhuang and Bo Zhao and Guohao Dai and Yu Wang},
      year={2025},
      eprint={2507.17728},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.17728}, 
}

Contact

If you have any questions, please feel free to open a GitHub issue or reach out through our WeChat group.
