|
--- |
|
license: gemma |
|
metrics: |
|
- perplexity |
|
base_model: |
|
- google/gemma-3-4b-it-qat-q4_0-gguf |
|
--- |
|
|
|
This is a requantized version of [google/gemma-3-4b-it-qat-q4_0-gguf](https://huggingface.co/google/gemma-3-4b-it-qat-q4_0-gguf).
|
|
|
The official QAT weights released by Google use fp16 (instead of Q6_K) for the embeddings table, which makes the model take significantly more memory (and storage) than a Q4_0 quant is supposed to.
|
~~Instead of quantizing the table myself, I extracted it from Bartowski's quantized models, because those were already calibrated with imatrix, which should squeeze some extra performance out of it.~~ |
|
Requantizing with llama.cpp fixes that, and gives better results than the extraction approach described above.
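
For anyone who wants to reproduce this, requantization can be done with llama.cpp's `llama-quantize` tool. Here is a minimal sketch (the file paths are placeholders and the exact invocation is an assumption, not the literal command used):

```python
import subprocess

# Requantize the official QAT Q4_0 GGUF so the fp16 embeddings table
# gets quantized like in a regular Q4_0 quant.
# --allow-requantize is needed because the input is already quantized.
subprocess.run(
    [
        "./llama-quantize",
        "--allow-requantize",
        "gemma-3-4b-it-q4_0.gguf",    # official QAT weights from Google
        "gemma-3-4b-it-q4_0_s.gguf",  # output (this repo's file)
        "Q4_0",
    ],
    check=True,
)
```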
|
|
|
Here are some benchmark results: |
|
|
|
| Model | File size ↓ | PPL (wiki.test.raw) ↓ | Hellaswag (first 4000 tasks, deterministic) ↑ |
|
| --- | --- | --- | --- | |
|
| [This model](https://huggingface.co/stduhpf/google-gemma-3-4b-it-qat-q4_0-gguf-small/blob/main/gemma-3-4b-it-q4_0_s.gguf) | 2.36 GB | 14.5758 +/- 0.13385 | 66.025% | |
|
| [This model (older version)](https://huggingface.co/stduhpf/google-gemma-3-4b-it-qat-q4_0-gguf-small/blob/359d9494e5e9276e5c4aec2a9e0bdebd74310b1a/gemma-3-4b-it-q4_0_s.gguf) | 2.36 GB | 14.5943 +/- 0.13405 | 65.675% | |
|
| [Q4_0 (bartowski)](https://huggingface.co/bartowski/google_gemma-3-4b-it-GGUF/blob/main/google_gemma-3-4b-it-Q4_0.gguf) | 2.37 GB | 16.8002 +/- 0.16519 | 65.65% |
|
| [QAT Q4_0 (google)](https://huggingface.co/google/gemma-3-4b-it-qat-q4_0-gguf/blob/main/gemma-3-4b-it-q4_0.gguf) | 3.16 GB | 14.5796 +/- 0.13395 | 66.075% | |
|
|
|
(*The Hellaswag scores here are not representative of real scores, since the questions were not randomized.)
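
For reference, numbers like these can be produced with llama.cpp's `llama-perplexity` tool; the sketch below shows plausible invocations (paths, data files, and default settings are assumptions, not the exact setup used here):

```python
import subprocess

MODEL = "gemma-3-4b-it-q4_0_s.gguf"  # placeholder path

# Perplexity over wikitext-2's wiki.test.raw, as in the table above.
subprocess.run(
    ["./llama-perplexity", "-m", MODEL, "-f", "wiki.test.raw"],
    check=True,
)

# Hellaswag score over the first 4000 tasks (not randomized, hence the caveat).
subprocess.run(
    ["./llama-perplexity", "-m", MODEL, "--hellaswag",
     "--hellaswag-tasks", "4000", "-f", "hellaswag_val_full.txt"],
    check=True,
)
```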
|
|
|
Note that this model ends up smaller than Bartowski's Q4_0. This is because llama.cpp promotes some tensors to Q4_1 when quantizing models to Q4_0 with an imatrix, whereas this is a static quant.
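
You can check this yourself by dumping the per-tensor quantization types with the `gguf` Python package that ships with llama.cpp (a sketch; the file path is a placeholder):

```python
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("gemma-3-4b-it-q4_0_s.gguf")  # placeholder path

# Tally tensors per quantization type: an imatrix Q4_0 quant shows some
# Q4_1 tensors, while a static quant should not.
print(Counter(t.tensor_type.name for t in reader.tensors))

# The embeddings table specifically: F16 in the official QAT file,
# Q6_K after requantizing.
for t in reader.tensors:
    if t.name == "token_embd.weight":
        print(t.name, t.tensor_type.name)
```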
|
Despite the size difference, the perplexity scores of this model and the original QAT weights are within each other's margin of error.
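
To make that concrete: treating the two measurements as independent (an assumption), the PPL gap is a small fraction of the combined standard error:

```python
import math

ppl_small, se_small = 14.5758, 0.13385  # this model
ppl_qat,   se_qat   = 14.5796, 0.13395  # official QAT Q4_0

diff = abs(ppl_small - ppl_qat)             # 0.0038
se_combined = math.hypot(se_small, se_qat)  # ~0.189

print(f"{diff:.4f} vs {se_combined:.4f}")   # far below one standard error
```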
|
|
|
The drop in Hellaswag score with the older version of the model is what made me realize something was probably missing in my previous approach. The new version is much better.
|
|
|
I also fixed the control token metadata, which was slightly degrading the performance of the model in instruct mode. Shoutout to ngxson for [finding the issue](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-gguf/discussions/3#67f6a2e0207b4bceea793151), tdh111 for [making me aware of the issue](https://huggingface.co/stduhpf/google-gemma-3-27b-it-qat-q4_0-gguf-small/discussions/3#67f74fdf8411d4d6a82049db), and u/dampflokfreund on Reddit ([Dampfinchen](https://huggingface.co/Dampfinchen) on Hugging Face) for [sharing the steps to fix it](https://www.reddit.com/r/LocalLLaMA/comments/1jvi860/comment/mmcuvw2).
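
If you want to verify the fix, the control-token metadata can be inspected with the same `gguf` package; a sketch (the path is a placeholder, and the exact fix steps are in the linked Reddit comment):

```python
from gguf import GGUFReader

reader = GGUFReader("gemma-3-4b-it-q4_0_s.gguf")  # placeholder path

tokens = reader.fields["tokenizer.ggml.tokens"]
ttypes = reader.fields["tokenizer.ggml.token_type"]

# The chat-turn delimiters should report token type 3 (CONTROL);
# that is what fixed metadata should show for them.
for i in range(len(ttypes.data)):
    tok = bytes(tokens.parts[tokens.data[i]]).decode("utf-8")
    if tok in ("<start_of_turn>", "<end_of_turn>"):
        print(tok, int(ttypes.parts[ttypes.data[i]][0]))
```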