---
license: apache-2.0
base_model: Qwen/Qwen3-30B-A3B
base_model_relation: quantized
tags:
  - Qwen
  - Qwen3
  - GGUF
  - quantized
  - 4-bit
---

Llama.cpp hybrid layer quantization of Qwen3-30B-A3B by Alibaba

Original model: https://huggingface.co/Qwen/Qwen3-30B-A3B

The hybrid quant employs different quantization levels on a per-layer basis to increase the flexibility of trading off performance vs. file size. Fewer parameter bits are used at deep layers and more bits at cortex layers to simultaneously optimize quantized size and model performance. This quant was designed to match IQ4_XS size and perform better than IQ4_XS while using all K-quants for faster CPU processing. For this file the layer quants are as follows:

LAYER_TYPES='[
   [0 ,"Q3_K_M"],[1 ,"Q3_K_M"],[2 ,"Q3_K_M"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
   [8 ,"Q3_K_L"],[9 ,"Q3_K_M"],[10,"Q3_K_L"],[11,"Q3_K_M"],[12,"Q3_K_L"],[13,"Q3_K_M"],[14,"Q3_K_L"],[15,"Q3_K_M"],
   [16,"Q3_K_L"],[17,"Q3_K_L"],[18,"Q3_K_L"],[19,"Q3_K_L"],[20,"Q3_K_L"],[21,"Q3_K_L"],[22,"Q3_K_L"],[23,"Q3_K_L"],
   [24,"Q4_K_S"],[25,"Q3_K_L"],[26,"Q4_K_S"],[27,"Q3_K_L"],[28,"Q4_K_S"],[29,"Q3_K_L"],[30,"Q4_K_S"],[31,"Q3_K_L"],
   [32,"Q4_K_S"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_S"],[36,"Q4_K_S"],[37,"Q4_K_S"],[38,"Q4_K_S"],[39,"Q4_K_S"],
   [40,"Q4_K_M"],[41,"Q4_K_M"],[42,"Q4_K_M"],[43,"Q4_K_M"],[44,"Q4_K_M"],[45,"Q4_K_M"],[46,"Q4_K_M"],[47,"Q4_K_M"]
   ]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K"
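
As a rough illustration (not the exact command used to produce this file), these settings could be fed to a llama-quantize build patched to honor a LAYER_TYPES environment variable, as discussed in the llama.cpp thread linked at the end of this card; the patched-tool behavior, the source GGUF filename, and the Q4_K_M fallback type are assumptions here:

```bash
# Sketch only: mainline llama-quantize has no per-layer override, so a patched
# build that reads LAYER_TYPES is assumed. LAYER_TYPES and FLAGS are the
# variables defined above; the input filename and fallback type are placeholders.
export LAYER_TYPES
llama-quantize $FLAGS Qwen3-30B-A3B-BF16.gguf Qwen3-30B-A3B.Q4_K_H.gguf Q4_K_M
```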

These quants were optimized for high reasoning performance.

Comparison:

| Quant  | Size   | PPL  | Comment                  |
| ------ | ------ | ---- | ------------------------ |
| IQ4_XS | 16.6e9 | 9.15 | default embed and output |
| Q4_K_H | 16.6e9 | 9.10 | Q4_K embed, Q6_K output  |
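
The PPL numbers above can be checked with the llama-perplexity tool; the evaluation text file and context length below are placeholders, not necessarily the settings used for this table:

```bash
# Hedged sketch: measure perplexity of the hybrid quant on a reference text.
# wiki.test.raw and -c 2048 are placeholders, not the exact eval setup above.
llama-perplexity -m Qwen3-30B-A3B.Q4_K_H.gguf -f wiki.test.raw -c 2048
```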

Usage:

This MoE model can be run efficiently by offloading the expert tensors to CPU via -ot exps=CPU, which frees GPU memory for a very large context. The smaller size of the optimally quantized parameters also gives an effective boost in CPU processing speed, since less memory bandwidth is needed to repeatedly copy them from main memory into SIMD registers. The model can also be run fully offloaded to GPU via RPC or on a high-VRAM GPU. An example launch is shown below.
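
A minimal example launch, assuming a recent llama.cpp build with tensor-override support; the GPU layer count and context size are placeholders to adjust for the hardware:

```bash
# Keep attention/dense weights on GPU, route MoE expert tensors to CPU,
# and open a large context window. Adjust -ngl and -c to the hardware.
llama-server -m Qwen3-30B-A3B.Q4_K_H.gguf \
    -ngl 99 \
    -ot "exps=CPU" \
    -c 32768
```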

Benchmarks:

Partial evals for the model are given here: https://huggingface.co/spaces/steampunque/benchlm.

Download the file below:

| Link                      | Type   | Size     | Notes        |
| ------------------------- | ------ | -------- | ------------ |
| Qwen3-30B-A3B.Q4_K_H.gguf | Q4_K_H | 16.6e9 B | ~IQ4_XS size |
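
The file can also be fetched from the command line; the repository id below is a placeholder and should be replaced with this model page's actual id:

```bash
# REPO_ID is a placeholder; set it to this model repository's actual id.
REPO_ID="user/Qwen3-30B-A3B-GGUF"
huggingface-cli download "$REPO_ID" Qwen3-30B-A3B.Q4_K_H.gguf --local-dir .
```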

A discussion thread about the hybrid layer quant approach can be found on the llama.cpp GitHub repository:

https://github.com/ggml-org/llama.cpp/discussions/13040