Update README.md
README.md CHANGED
@@ -18,7 +18,7 @@ Here are some perplexity measurements:
 | [This model](https://huggingface.co/stduhpf/google-gemma-3-1b-it-qat-q4_0-gguf-small/blob/main/gemma-3-1b-it-q4_0_s.gguf) | 720 MB | 28.2603 +/- 0.26947 |
 | [Q4_0 (bartowski)](https://huggingface.co/bartowski/google_gemma-3-1b-it-GGUF/blob/main/google_gemma-3-1b-it-Q4_0.gguf) | 722 MB | 34.4906 +/- 0.34539 |
 | [QAT Q4_0 (google)](https://huggingface.co/google/gemma-3-1b-it-qat-q4_0-gguf/blob/main/gemma-3-1b-it-q4_0.gguf) | 1 GB | 28.0400 +/- 0.26669 |
-| [BF16](https://huggingface.co/google/gemma-3-1b-it) | 2 GB | 29.1129 +/- 0.28170 |
+| [BF16 (upscaled to f32 for faster inference)](https://huggingface.co/google/gemma-3-1b-it) | 2 GB | 29.1129 +/- 0.28170 |
 
 Note that this model ends up smaller than the Q4_0 from Bartowski. This is because llama.cpp sets some tensors to Q4_1 when quantizing models to Q4_0, but Google decided to use only Q4_0 instead, which is slightly smaller (see the size arithmetic sketched below).
 The perplexity scores between this model and the original QAT are barely within the margin of error; it seems the embedding table starts making a difference at this small size, though the trade-off is probably still worth it.
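
To make the size difference concrete, here is a small Python sketch. The block layouts are ggml's standard Q4_0/Q4_1 formats; the `GGUFReader` part assumes the `gguf` package from llama.cpp's gguf-py, and the file path is a placeholder:

```python
from collections import Counter

from gguf import GGUFReader  # pip install gguf (llama.cpp's gguf-py)

# ggml packs 32 weights per block for both types. Q4_1 stores an extra
# f16 minimum per block, costing 0.5 bits per weight more than Q4_0.
Q4_0_BLOCK_BYTES = 2 + 16      # f16 scale + 32 x 4-bit quants = 18 B -> 4.5 bpw
Q4_1_BLOCK_BYTES = 2 + 2 + 16  # f16 scale + f16 min + quants  = 20 B -> 5.0 bpw

for name, nbytes in (("Q4_0", Q4_0_BLOCK_BYTES), ("Q4_1", Q4_1_BLOCK_BYTES)):
    print(f"{name}: {nbytes} bytes / 32 weights = {nbytes * 8 / 32:.1f} bits per weight")

# Count how many tensors of each quantization type a GGUF file contains
# (path is hypothetical; point it at the Bartowski file to see the Q4_1 tensors).
reader = GGUFReader("google_gemma-3-1b-it-Q4_0.gguf")
print(Counter(t.tensor_type.name for t in reader.tensors))
```

Each tensor stored as Q4_1 therefore costs an extra 0.5 bits per weight over plain Q4_0, which is consistent with the small size gap between the two 4-bit files in the table.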
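
And a quick check of the "barely within the margin of error" claim, plugging in the numbers from the table; this is a naive interval-overlap test, not a proper significance test:

```python
# Perplexity means and +/- margins copied from the table above.
this_model = (28.2603, 0.26947)  # gemma-3-1b-it-q4_0_s.gguf
google_qat = (28.0400, 0.26669)  # Google's original QAT Q4_0

gap = abs(this_model[0] - google_qat[0])
print(f"gap between means : {gap:.4f}")            # 0.2203
print(f"one margin        : {this_model[1]:.4f}")  # 0.2695 -- the gap fits, barely
print("intervals overlap :", gap < this_model[1] + google_qat[1])  # True
```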