Llama.cpp hybrid layer quantization of Llama 3.1 8B Instruct by meta-llama

Original model: https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct

The hybrid quant employs different quantization levels on a per-layer basis to enable both high performance and small file size at the same time. This particular quant achieves a ~6.0e9 byte GGUF with the same perplexity as a ~6.6e9 byte Q6_K GGUF. The quants employed are all K quants to avoid slow CPU or older-GPU processing of IQ quants. For this file the layer quants are as follows (a sketch of an equivalent stock llama-quantize invocation follows the list):

   LAYER_TYPES='[
   [0 ,"Q6_K"  ],[1 ,"Q5_K_M"],[2 ,"Q4_K_M"],[3 ,"Q4_K_M"],[4 ,"Q4_K_M"],[5 ,"Q4_K_M"],[6 ,"Q4_K_M"],[7 ,"Q4_K_M"],
   [8 ,"Q5_K_M"],[9 ,"Q5_K_S"],[10,"Q5_K_M"],[11,"Q5_K_S"],[12,"Q5_K_M"],[13,"Q5_K_S"],[14,"Q5_K_M"],[15,"Q5_K_S"],
   [16,"Q5_K_M"],[17,"Q5_K_M"],[18,"Q5_K_M"],[19,"Q5_K_M"],[20,"Q5_K_M"],[21,"Q5_K_M"],[22,"Q5_K_M"],[23,"Q5_K_M"],
   [24,"Q5_K_M"],[25,"Q5_K_M"],[26,"Q6_K"  ],[27,"Q6_K"  ],[28,"Q6_K"  ],[29,"Q8_0"  ],[30,"Q8_0"  ],[31,"Q8_0"  ]
   ]'
   FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K"

Comparison:

   Quant     Size     PPL    Comment
   Q6_K      6.6e9    7.2    Q6_K with default embedding and output
   Q6_K_H    6.0e9    7.2    Hybrid quant with Q6_K embedding and Q6_K output
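The PPL column can be reproduced with llama.cpp's llama-perplexity tool. A minimal sketch, assuming a wikitext-style test file; the exact corpus and context length behind the numbers above are not stated on this card:

   # Sketch: compare perplexity of the hybrid quant against plain Q6_K.
   # wiki.test.raw is a placeholder for whatever test corpus you use.
   ./llama-perplexity -m Llama-3.1-8B-Instruct.Q6_K_H.gguf -f wiki.test.raw -c 512
   ./llama-perplexity -m Llama-3.1-8B-Instruct.Q6_K.gguf   -f wiki.test.raw -c 512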

Usage:

This model may be used together with the fixie-ai ultravox-v0_5-llama-3_1-8b audio projector, which lets it take audio (.mp3 and .wav files) and text as input and generate text output. The mmproj file is available here: https://huggingface.co/steampunque/ultravox-v0_5-llama-3_1-8b-Hybrid-GGUF . More information about running multimodal models can be found in the mtmd README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md
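As a concrete starting point, a minimal sketch using llama.cpp's multimodal CLI; file names are illustrative and the option set should be checked against the mtmd README linked above:

   # Sketch: run the hybrid quant with the ultravox audio projector.
   # Requires a llama.cpp build with mtmd audio support; check the mtmd
   # README for the options available in your build.
   ./llama-mtmd-cli \
       -m Llama-3.1-8B-Instruct.Q6_K_H.gguf \
       --mmproj ultravox-v0_5-llama-3_1-8b.mmproj.gguf \
       --audio clip.mp3 \
       -p "Transcribe the audio, then summarize it in one sentence."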

Benchmarks:

A full set of benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm

Download the files from the links below:

   Link                                      Type      Size/e9 B   Notes
   Llama-3.1-8B-Instruct.Q6_K_H.gguf         Q6_K_H    6.0         0.6e9 B smaller than Q6_K
   ultravox-v0_5-llama-3_1-8b.mmproj.gguf    mmproj    1.38        multimedia projector
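A minimal sketch for fetching both files with the Hugging Face CLI; the mmproj file is hosted in the separate repository linked in the Usage section above:

   # Sketch: download the model and the audio projector with huggingface-cli.
   huggingface-cli download steampunque/Llama-3.1-8B-Instruct-Hybrid-GGUF \
       Llama-3.1-8B-Instruct.Q6_K_H.gguf --local-dir .
   huggingface-cli download steampunque/ultravox-v0_5-llama-3_1-8b-Hybrid-GGUF \
       ultravox-v0_5-llama-3_1-8b.mmproj.gguf --local-dir .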

A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
