Llama.cpp hybrid layer quantization of Qwen2.5-VL-3B-Instruct by Alibaba

Original model: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct

The hybrid quant employs different quantization levels on a per layer basis to enable both high performance and small file size at the same time. This particular quant achieves a ~2.8G GGUF with approximately the same perplexity as a ~3.3G Q8_0 GGUF. The quants employed are all K quants to avoid slow processing of IQ quants on CPUs or older GPUs. For this file the layer quants are as follows (an example quantize invocation is sketched after the listing):

   LAYER_TYPES='[
   [0 ,"Q8_0"  ],[1 ,"Q5_K_M"],[2 ,"Q5_K_M"],[3 ,"Q5_K_M"],[4 ,"Q5_K_M"],[5 ,"Q5_K_M"],
   [6 ,"Q5_K_M"],[7 ,"Q5_K_M"],[8, "Q5_K_M"],[9, "Q5_K_M"],[10,"Q5_K_M"],[11,"Q5_K_M"],
   [12,"Q6_K"  ],[13,"Q6_K"  ],[14,"Q6_K"  ],[15,"Q6_K"  ],[16,"Q6_K"  ],[17,"Q6_K"  ],
   [18,"Q6_K"  ],[19,"Q6_K"  ],[20,"Q6_K"  ],[21,"Q6_K"  ],[22,"Q6_K"  ],[23,"Q6_K"  ],
   [24,"Q8_0"  ],[25,"Q8_0"  ],[26,"Q8_0"  ],[27,"Q8_0"  ],[28,"Q8_0"  ],[29,"Q8_0"  ],
   [30,"Q8_0"  ],[31,"Q8_0"  ],[32,"Q8_0"  ],[33,"Q8_0"  ],[34,"Q8_0"  ],[35,"Q8_0"  ]
   ]'
   FLAGS="--token-embedding-type Q8_0 --output-tensor-type Q6_K"

Comparison:

   Quant    Size    PPL    Comment
   Q8_0     3.3e9   11.6   Q8_0 with default embedding and output
   Q8_0_H   2.8e9   11.3   Hybrid quant with Q8_0 embedding and Q6_K output
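
Perplexity figures of this kind are typically measured with llama.cpp's llama-perplexity tool; a minimal sketch follows, where the test corpus file name is an assumption and not necessarily the one used for the table above.

   # Sketch: measure perplexity of the hybrid quant against a text file.
   llama-perplexity -m Qwen2.5-VL-3B-Instruct.Q8_0_H.gguf -f wiki.test.raw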

Usage:

Qwen2.5-VL-3B-Instruct is a vision capable model. It can be used together with its multimedia projector layers to process image and text inputs and generate text outputs. The mmproj file is made available in this repository. To test vision mode, follow the docs in the mtmd README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md
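
As a quick vision-mode check, an invocation along the following lines with the llama-mtmd-cli tool from that directory should work; the image path and prompt here are placeholders.

   # Sketch: run the model together with its multimedia projector on one image.
   llama-mtmd-cli -m Qwen2.5-VL-3B-Instruct.Q8_0_H.gguf \
      --mmproj Qwen2.5-VL-3B-Instruct.mmproj.gguf \
      --image example.jpg -p "Describe this image."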

Benchmarks:

A full set of benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm

Download the files below:

   Link                                 Type     Size       Notes
   Qwen2.5-VL-3B-Instruct.Q8_0_H.gguf   Q8_0_H   2.8e9 B    ~0.5e9 B smaller than Q8_0
   Qwen2.5-VL-3B-Instruct.mmproj.gguf   mmproj   1.34e9 B   multimedia projector
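
The files can also be fetched from the command line with the Hugging Face CLI, for example (a sketch; the huggingface_hub CLI must be installed separately):

   # Sketch: download the hybrid quant and its projector from this repo.
   huggingface-cli download steampunque/Qwen2.5-VL-3B-Instruct-Hybrid-GGUF \
      Qwen2.5-VL-3B-Instruct.Q8_0_H.gguf Qwen2.5-VL-3B-Instruct.mmproj.gguf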

A discussion thread about the hybrid layer quant approach can be found in the llama.cpp GitHub repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
