Llama.cpp hybrid layer quantization of Qwen3-4B-Instruct-2507 by Qwen
Original model: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507
The hybrid quant employs different quantization levels on a per-layer basis, which adds flexibility in trading off performance against file size. Fewer bits are used at the deep layers and more bits at the cortex layers to simultaneously optimize quantized size and model performance. These quants were specifically optimized for the Qwen3 4B Instruct 2507 edge model to match the file size of the Q6_K quant while improving performance.
The layer quants are as follows:
```
Q5_K_L : Q5_K_M + attn_o = Q6_K
Q6_K_S : Q6_K
Q6_K_M : Q6_K_S + attn_v = Q8_0, ffn_d = Q8_0
Q6_K_L : Q6_K_M + attn_o = Q8_0
```
```
LAYER_TYPES='[
[0 ,"Q6_K_M"],[1 ,"Q6_K_S"],[2 ,"Q6_K_S"],[3 ,"Q6_K_S"],[4 ,"Q5_K_L"],[5 ,"Q5_K_M"],
[6 ,"Q5_K_M"],[7 ,"Q5_K_M"],[8 ,"Q5_K_M"],[9 ,"Q5_K_M"],[10,"Q5_K_M"],[11,"Q5_K_M"],
[12,"Q5_K_L"],[13,"Q5_K_L"],[14,"Q5_K_L"],[15,"Q5_K_L"],[16,"Q5_K_L"],[17,"Q5_K_L"],
[18,"Q6_K_S"],[19,"Q6_K_S"],[20,"Q6_K_S"],[21,"Q6_K_S"],[22,"Q6_K_S"],[23,"Q6_K_S"],
[24,"Q6_K_M"],[25,"Q6_K_M"],[26,"Q6_K_M"],[27,"Q6_K_M"],[28,"Q6_K_M"],[29,"Q6_K_M"],
[30,"Q6_K_L"],[31,"Q6_K_M"],[32,"Q6_K_L"],[33,"Q6_K_L"],[34,"Q6_K_L"],[35,"Q8_0" ]
]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
```
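Since the LAYER_TYPES list must assign a quant to every transformer layer exactly once, it can be sanity-checked before quantizing. The snippet below is a sketch of my own (the `python3` check is not part of the quantization tooling); it parses the map above and tallies the quant type assigned to each of the model's 36 layers:

```shell
# Sanity-check the per-layer quant map before quantizing: layers must
# run 0..35 in order, each with a named quant type.
LAYER_TYPES='[
[0,"Q6_K_M"],[1,"Q6_K_S"],[2,"Q6_K_S"],[3,"Q6_K_S"],[4,"Q5_K_L"],[5,"Q5_K_M"],
[6,"Q5_K_M"],[7,"Q5_K_M"],[8,"Q5_K_M"],[9,"Q5_K_M"],[10,"Q5_K_M"],[11,"Q5_K_M"],
[12,"Q5_K_L"],[13,"Q5_K_L"],[14,"Q5_K_L"],[15,"Q5_K_L"],[16,"Q5_K_L"],[17,"Q5_K_L"],
[18,"Q6_K_S"],[19,"Q6_K_S"],[20,"Q6_K_S"],[21,"Q6_K_S"],[22,"Q6_K_S"],[23,"Q6_K_S"],
[24,"Q6_K_M"],[25,"Q6_K_M"],[26,"Q6_K_M"],[27,"Q6_K_M"],[28,"Q6_K_M"],[29,"Q6_K_M"],
[30,"Q6_K_L"],[31,"Q6_K_M"],[32,"Q6_K_L"],[33,"Q6_K_L"],[34,"Q6_K_L"],[35,"Q8_0"]
]'
python3 - "$LAYER_TYPES" <<'EOF'
import json, sys
from collections import Counter
layers = json.loads(sys.argv[1])
# Every layer index 0..35 must appear exactly once, in order.
assert [l[0] for l in layers] == list(range(36)), "layers must be 0..35 in order"
# Tally how many layers use each quant type.
print(sorted(Counter(t for _, t in layers).items()))
EOF
```

Counting the entries this way shows the mix: 7 layers at Q5_K_L, 7 at Q5_K_M, 4 at Q6_K_L, 8 at Q6_K_M, 9 at Q6_K_S, and 1 at Q8_0. Note that applying the map requires a `llama-quantize` build with the hybrid layer quant patch (the `--layer-types-high` flag above is not in mainline llama.cpp).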
These quants were optimized for accurate reasoning performance across a small set of curated test prompts.
Comparison:

| Quant | Size | PPL | Comment |
| --- | --- | --- | --- |
| Q6_K | 3.3e9 | 10.6 | default embed and output |
| Q6_K_H | 3.3e9 | 10.6 | improved reasoning over Q6_K |
Evals of the model are available at https://huggingface.co/spaces/steampunque/benchlm
Download the file from the link below:

| Link | Type | Size/e9 B | Notes |
| --- | --- | --- | --- |
| Qwen3-4B-Instruct-2507.Q6_K_H.gguf | Q6_K_H | 3.3 | Q6_K size with improved reasoning |
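As a usage sketch, the quant can be fetched and served with stock llama.cpp tooling; the local directory, context size, and port below are illustrative choices, not requirements of the model:

```shell
# Illustrative: fetch the quant from this repo and serve it locally.
huggingface-cli download steampunque/Qwen3-4B-Instruct-2507-Hybrid-GGUF \
    Qwen3-4B-Instruct-2507.Q6_K_H.gguf --local-dir .
llama-server -m Qwen3-4B-Instruct-2507.Q6_K_H.gguf -c 8192 --port 8080
```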
A discussion thread about the hybrid layer quant approach can be found on the llama.cpp git repository.