---
license: gemma
metrics:
- perplexity
base_model:
- google/gemma-3-1b-it-qat-q4_0-gguf
- bartowski/google_gemma-3-1b-it-GGUF
---

This is a "self" merge (a merge of two quantizations of the same model) of https://huggingface.co/google/gemma-3-1b-it-qat-q4_0-gguf and https://huggingface.co/bartowski/google_gemma-3-1b-it-GGUF.

The official QAT weights released by Google keep the token embeddings table in fp16 (instead of Q6_K), which makes the model take up significantly more memory (and storage) than a Q4_0 quant is supposed to. Instead of quantizing that table myself, I extracted it from Bartowski's quantization, because his models were already calibrated with an importance matrix (imatrix), which should squeeze some extra performance out of it.

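As a point of reference, here is a minimal sketch of how the embeddings tensor in the two source files could be inspected with the `gguf` Python package that ships with llama.cpp (the local file paths are assumptions):

```python
# Minimal sketch: compare the token embeddings tensor across the two source
# GGUF files. Assumes the `gguf` package from the llama.cpp repository;
# the local file paths below are placeholders.
from gguf import GGUFReader

paths = [
    "gemma-3-1b-it-q4_0.gguf",         # Google's QAT release (fp16 embeddings)
    "google_gemma-3-1b-it-Q4_0.gguf",  # Bartowski's quant (Q6_K embeddings)
]
for path in paths:
    reader = GGUFReader(path)
    for tensor in reader.tensors:
        if tensor.name == "token_embd.weight":
            print(f"{path}: {tensor.tensor_type.name}, "
                  f"{tensor.n_bytes / 1e6:.0f} MB")
```
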
Here are some perplexity measurements:

| Model | File size ↓ | PPL (wiki.test.raw) ↓ |
| --- | --- | --- |
| [This model](https://huggingface.co/stduhpf/google-gemma-3-1b-it-qat-q4_0-gguf-small/blob/main/gemma-3-1b-it-q4_0_s.gguf) | 720 MB | 28.2603 +/- 0.26947 |
| [Q4_0 (bartowski)](https://huggingface.co/bartowski/google_gemma-3-1b-it-GGUF/blob/main/google_gemma-3-1b-it-Q4_0.gguf) | 722 MB | 34.4906 +/- 0.34539 |
| [QAT Q4_0 (google)](https://huggingface.co/google/gemma-3-1b-it-qat-q4_0-gguf/blob/main/gemma-3-1b-it-q4_0.gguf) | 1 GB | 28.2603 +/- 0.26947 |

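Measurements like these can be reproduced with llama.cpp's `llama-perplexity` tool over the wikitext-2 test split; here is a hedged sketch of the invocation (the binary is assumed to be on PATH, and the file paths are placeholders):

```python
# Hedged sketch: measure perplexity of each model over wikitext-2's test
# split using llama.cpp's llama-perplexity tool. The binary is assumed to
# be on PATH and the file paths are placeholders.
import subprocess

models = [
    "gemma-3-1b-it-q4_0_s.gguf",       # this model
    "google_gemma-3-1b-it-Q4_0.gguf",  # Bartowski's Q4_0
    "gemma-3-1b-it-q4_0.gguf",         # Google's QAT Q4_0
]
for model in models:
    subprocess.run(
        ["llama-perplexity", "-m", model, "-f", "wiki.test.raw"],
        check=True,
    )
```
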
Note that this model ends up slightly smaller than the Q4_0 from Bartowski. This is because llama.cpp promotes some tensors to Q4_1 when quantizing models to Q4_0, whereas Google used only Q4_0, which is slightly smaller.

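To see where that size difference comes from, one could tally the quantization type of every tensor in each file (same assumptions as the sketch above):

```python
# Sketch: count the tensors of each quantization type in a GGUF file.
# Per the note above, Bartowski's file should report a few Q4_1 tensors
# that Google's QAT file stores as Q4_0. The path is a placeholder.
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("google_gemma-3-1b-it-Q4_0.gguf")
print(Counter(t.tensor_type.name for t in reader.tensors))
```
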
The perplexity difference between this model and the original QAT weights is barely within the margin of error; the embeddings table seems to start making a difference at this small model size, though the trade-off is probably still worth it.