---
license: gemma
metrics:
- perplexity
base_model:
- google/gemma-3-4b-it-qat-q4_0-gguf
- bartowski/google_gemma-3-4b-it-GGUF
---

This is a "self" merge of https://huggingface.co/google/gemma-3-4b-it-qat-q4_0-gguf and https://huggingface.co/bartowski/google_gemma-3-4b-it-GGUF.

The official QAT weights released by Google use fp16 (instead of Q6_K) for the embeddings table, which makes the model take significantly more memory (and storage) than a Q4_0 quant is supposed to. Instead of quantizing the table myself, I extracted it from Bartowski's quantized model, since it was already calibrated with an imatrix, which should squeeze some extra performance out of it.

Here are some perplexity measurements:

| Model | File size ↓ | PPL (wiki.test.raw) ↓ |
| --- | --- | --- |
| [This model](https://huggingface.co/stduhpf/google-gemma-3-4b-it-qat-q4_0-gguf-small/blob/main/gemma-3-4b-it-q4_0_s.gguf) | 2.36 GB | 14.5943 +/- 0.13405 |
| [Q4_0 (bartowski)](https://huggingface.co/bartowski/google_gemma-3-4b-it-GGUF/blob/main/google_gemma-3-4b-it-Q4_0.gguf) | 2.37 GB | 16.8002 +/- 0.16519 |
| [QAT Q4_0 (google)](https://huggingface.co/google/gemma-3-4b-it-qat-q4_0-gguf/blob/main/gemma-3-4b-it-q4_0.gguf) | 3.16 GB | 14.5796 +/- 0.13395 |

Note that this model ends up smaller than Bartowski's Q4_0. This is because llama.cpp assigns Q4_1 to some tensors when quantizing a model to Q4_0, whereas Google used Q4_0 throughout, which is slightly smaller. Despite the size difference, the perplexity scores of this model and the original QAT release are within each other's margin of error.
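
If you want to check the tensor layout of the source files yourself, you can list each tensor's quantization type with the `gguf` Python package that ships with llama.cpp (`pip install gguf`). This is only a minimal sketch, not the script used to build this merge: the file paths are placeholders for your local downloads, and it assumes the embeddings table is stored under the usual GGUF name `token_embd.weight`.

```python
# Minimal sketch: compare tensor quantization types across the two source GGUFs.
# File names are placeholders -- point them at your local copies.
from collections import Counter

from gguf import GGUFReader

FILES = {
    "QAT Q4_0 (google)": "gemma-3-4b-it-q4_0.gguf",
    "Q4_0 (bartowski)": "google_gemma-3-4b-it-Q4_0.gguf",
}

for label, path in FILES.items():
    reader = GGUFReader(path)

    # Count how many tensors use each quantization type (Q4_0 vs Q4_1, etc.).
    type_counts = Counter(t.tensor_type.name for t in reader.tensors)

    # The token embeddings table is the tensor this merge replaces.
    embd = next(t for t in reader.tensors if t.name == "token_embd.weight")

    print(f"{label}: {path}")
    print(f"  tensor types: {dict(type_counts)}")
    print(f"  token_embd.weight: {embd.tensor_type.name}, "
          f"{embd.n_bytes / 1024**2:.1f} MiB")
```

In Google's QAT file the embeddings table should show up as F16, while Bartowski's quant stores it as Q6_K; this model simply carries that Q6_K table over into the QAT weights. The per-type counts also make the Q4_0 vs Q4_1 difference mentioned above visible.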