stduhpf committed
Commit 58686fb · verified · Parent: df161ae

Update README.md

Files changed (1)
  1. README.md +7 -6
README.md CHANGED
@@ -7,19 +7,20 @@ base_model:
  - bartowski/google_gemma-3-1b-it-GGUF
  ---

- This is a "self" merge of https://huggingface.co/google/gemma-3-1b-it-qat-q4_0-gguf and https://huggingface.co/bartowski/google_gemma-3-1b-it-GGUF.
+ This is a "self" merge of https://huggingface.co/google/gemma-3-1b-it-qat-q4_0-gguf.

- The official QAT weights released by Google use fp16 (instead of Q6_K) for the embeddings table, which makes this model take significantly more memory (and storage) than Q4_0 quants are supposed to. Instead of quantizing the table myself, I extracted it from Bartowski's quantized models, because those were already calibrated with an imatrix, which should squeeze some extra performance out of it.
+ The official QAT weights released by Google use fp16 (instead of Q6_K) for the embeddings table, which makes this model take significantly more memory (and storage) than Q4_0 quants are supposed to.
+ ~~Instead of quantizing the table myself, I extracted it from Bartowski's quantized models, because those were already calibrated with an imatrix, which should squeeze some extra performance out of it.~~
+ Requantizing the embeddings table with llama.cpp fixes that and gives better results than extracting it from Bartowski's quants.

  Here are some perplexity measurements:

  | Model | File size ↓ | PPL (wiki.test.raw) ↓ |
  | --- | --- | --- |
- | [This model](https://huggingface.co/stduhpf/google-gemma-3-1b-it-qat-q4_0-gguf-small/blob/main/gemma-3-1b-it-q4_0_s.gguf) | 720 MB | 28.2603 +/- 0.26947 |
+ | [This model](https://huggingface.co/stduhpf/google-gemma-3-1b-it-qat-q4_0-gguf-small/blob/main/gemma-3-1b-it-q4_0_s.gguf) | 720 MB | 28.0468 +/- 0.26681 |
+ | [This model (older version)](https://huggingface.co/stduhpf/google-gemma-3-1b-it-qat-q4_0-gguf-small/blob/f325927302d106ad204c0b6a8f09f216a0447519/gemma-3-1b-it-q4_0_s.gguf) | 720 MB | 28.2603 +/- 0.26947 |
  | [Q4_0 (bartowski)](https://huggingface.co/bartowski/google_gemma-3-1b-it-GGUF/blob/main/google_gemma-3-1b-it-Q4_0.gguf) | 722 MB | 34.4906 +/- 0.34539 |
  | [QAT Q4_0 (google)](https://huggingface.co/google/gemma-3-1b-it-qat-q4_0-gguf/blob/main/gemma-3-1b-it-q4_0.gguf) | 1 GB | 28.0400 +/- 0.26669 |
  | [BF16 (upscaled to f32 for faster inference)](https://huggingface.co/google/gemma-3-1b-it) | 2 GB | 29.1129 +/- 0.28170 |

- Note that this model ends up smaller than the Q4_0 from Bartowski. This is because llama.cpp sets some tensors to Q4_1 when quantizing models to Q4_0 with an imatrix, but this is a static quant.
- The perplexity scores for this model and the original QAT are barely within margin of error of each other; the embedding table seems to start making a difference at this small size, though the trade-off is probably still worth it.
- Interestingly, perplexity seems significantly better for both QAT models than for BF16.
+ Note that this model ends up smaller than the Q4_0 from Bartowski. This is because llama.cpp sets some tensors to Q4_1 when quantizing models to Q4_0 with an imatrix, but this is a static quant.
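
For reference, a requantization like the one described above can be done with llama.cpp's `llama-quantize` tool. The sketch below is one plausible invocation, not necessarily the exact command used for this repo: the file names and the choice of `q6_k` for the token-embedding table are assumptions, while `--allow-requantize` and `--token-embedding-type` are real `llama-quantize` options.

```python
# Hypothetical requantization of the official QAT GGUF so that the fp16
# token-embedding table becomes Q6_K while the other tensors stay Q4_0.
# Paths and the q6_k choice are placeholders for illustration.
import subprocess

subprocess.run(
    [
        "./llama-quantize",
        "--allow-requantize",               # input tensors are already Q4_0
        "--token-embedding-type", "q6_k",   # shrink the fp16 embeddings table
        "gemma-3-1b-it-q4_0.gguf",          # official QAT Q4_0 release (input)
        "gemma-3-1b-it-q4_0_s.gguf",        # smaller output file
        "Q4_0",                             # target type for the remaining tensors
    ],
    check=True,
)
```

Because no imatrix is involved here, the result stays a static quant, which is consistent with the note above about Q4_1 tensors only appearing in imatrix-based Q4_0 quants.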
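
The PPL column is the kind of score llama.cpp's `llama-perplexity` tool reports on the WikiText-2 raw test split (`wiki.test.raw`). A minimal sketch of such a run, with the binary location and model path assumed:

```python
# Run llama.cpp's perplexity tool against wiki.test.raw for one model.
# The binary location and model file name are placeholders.
import subprocess

subprocess.run(
    [
        "./llama-perplexity",
        "-m", "gemma-3-1b-it-q4_0_s.gguf",  # model under test
        "-f", "wiki.test.raw",              # WikiText-2 raw test text
    ],
    check=True,
)
```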
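
To see where the size differences come from (fp16 vs Q6_K for the embeddings table, and Q4_1 tensors in the imatrix-based Q4_0 vs plain Q4_0 here), the per-tensor quantization types can be listed with the `gguf` Python package that ships with llama.cpp. This is a sketch based on gguf-py's reader API; the file name is a placeholder.

```python
# Print each tensor's name, quantization type, and size for a GGUF file,
# using gguf-py's GGUFReader. For this repo's file, token_embd.weight is
# expected to show as Q6_K; in the official QAT release it is F16.
from gguf import GGUFReader

reader = GGUFReader("gemma-3-1b-it-q4_0_s.gguf")
for tensor in reader.tensors:
    print(f"{tensor.name:40s} {tensor.tensor_type.name:6s} {int(tensor.n_bytes):>12,d} bytes")
```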