---
license: apache-2.0
base_model: Qwen/Qwen3-14B
base_model_relation: quantized
tags:
  - Qwen
  - Qwen3
  - GGUF
  - quantized
  - 4-bit
---

Llama.cpp hybrid layer quantization of Qwen3-14B by Alibaba

Original model: https://huggingface.co/Qwen/Qwen3-14B

The hybrid quant employs different quantization levels on a per-layer basis to increase flexibility in trading off performance vs. file size. Fewer parameter bits are used at deep layers and more bits at cortex layers to simultaneously optimize quantized size and model performance. This quant was designed to approximately match IQ4_XS size and performance while using all K-quants for faster CPU processing when partially offloaded. Partial evals for the model are given here: https://huggingface.co/spaces/steampunque/benchlm. This model can run fully offloaded on a 12G VRAM GPU, and a Q8 KV cache is recommended with it (see the example launch command below). For this file the layer quants are as follows:

```bash
LAYER_TYPES='[
   [0 ,"Q3_K_M"],[1 ,"Q3_K_M"],[2 ,"Q3_K_M"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
   [8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
   [16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
   [24,"Q3_K_L"],[25,"Q3_K_M"],[26,"Q3_K_L"],[27,"Q3_K_M"],[28,"Q3_K_L"],[29,"Q3_K_M"],[30,"Q3_K_L"],[31,"Q3_K_M"],
   [32,"Q3_K_L"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_S"],[36,"Q4_K_M"],[37,"Q5_K_S"],[38,"Q5_K_M"],[39,"Q6_K"]
   ]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K"
```
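The FLAGS shown are standard llama-quantize options; how the per-layer LAYER_TYPES recipe is consumed is described in the discussion thread linked at the bottom of this card. As a hedged sketch, a plain llama-quantize call applying only the global flags (with assumed filenames and an assumed fallback base type) would look like:

```bash
# Sketch only: applies just the global embedding/output overrides from FLAGS.
# The per-layer LAYER_TYPES recipe requires the hybrid layer quant workflow
# described in the llama.cpp discussion linked below; the filenames and the
# Q4_K_M fallback base type here are assumptions.
./llama-quantize \
    --token-embedding-type Q4_K \
    --output-tensor-type Q6_K \
    Qwen3-14B.BF16.gguf Qwen3-14B.Q4_K_H.gguf Q4_K_M
```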

These quants were optimized for high reasoning performance.
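As a minimal sketch of running this quant fully offloaded with the recommended Q8 KV cache, a llama.cpp server invocation might look like the following; context size and port are arbitrary example values, and flag spellings can vary between llama.cpp versions:

```bash
# Minimal sketch: offload all layers to the GPU and quantize the KV cache to
# Q8_0 as recommended above. Flash attention (-fa) is needed for a quantized
# V cache; context size and port are arbitrary example values.
./llama-server -m Qwen3-14B.Q4_K_H.gguf \
    -ngl 99 -fa \
    --cache-type-k q8_0 --cache-type-v q8_0 \
    -c 16384 --port 8080
```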

Comparison:

| Quant | Size (bytes) | PPL | Comment |
| ----- | ------------ | --- | ------- |
| IQ4_XS | 8.18e9 | 8.81 | default embed and output |
| Q4_K_H | 7.96e9 | 8.86 | Q4_K embed, Q6_K output |
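The perplexity figures above can be checked with llama.cpp's perplexity tool; the evaluation text file in the sketch below is an assumption, since the corpus used for these numbers is not specified on this card:

```bash
# Illustrative sketch: measure perplexity of the quant with llama-perplexity.
# wiki.test.raw is an assumed evaluation corpus, not necessarily the text used
# for the PPL numbers in the comparison table above.
./llama-perplexity -m Qwen3-14B.Q4_K_H.gguf -f wiki.test.raw -ngl 99
```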

Download the file below:

| Link | Type | Size (e9 B) | Notes |
| ---- | ---- | ----------- | ----- |
| Qwen3-14B.Q4_K_H.gguf | Q4_K_H | 7.96 | ~IQ4_XS size |
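A sketch of fetching the file with the Hugging Face CLI is shown below; the repository id is a placeholder, so substitute this model card's actual repo path:

```bash
# Illustrative sketch: download the GGUF with the Hugging Face CLI.
# <user>/<repo> is a placeholder for this model card's repository id.
huggingface-cli download <user>/<repo> Qwen3-14B.Q4_K_H.gguf --local-dir .
```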

A discussion thread about the hybrid layer quant approach can be found on the llama.cpp GitHub repository:

https://github.com/ggml-org/llama.cpp/discussions/13040