bartowski committed
Commit 814f8f2 · verified · 1 Parent(s): 524f434

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -32,8 +32,8 @@ Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or a
 
  | Filename | Quant type | File Size | Split | Description |
  | -------- | ---------- | --------- | ----- | ----------- |
- | [mmproj-gemma-3-27b-it-abliterated-f32.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-27b-it-GGUF/blob/main/mmproj-mlabonne_gemma-3-27b-it-f32.gguf) | f32 | 1.69GB | false | F32 format MMPROJ file, required for vision. |
- | [mmproj-gemma-3-27b-it-abliterated-f16.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-27b-it-GGUF/blob/main/mmproj-mlabonne_gemma-3-27b-it-f16.gguf) | f16 | 858MB | false | F16 format MMPROJ file, required for vision. |
+ | [mmproj-gemma-3-27b-it-abliterated-f32.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-27b-it-abliterated-GGUF/blob/main/mmproj-mlabonne_gemma-3-27b-it-abliterated-f32.gguf) | f32 | 1.69GB | false | F32 format MMPROJ file, required for vision. |
+ | [mmproj-gemma-3-27b-it-abliterated-f16.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-27b-it-abliterated-GGUF/blob/main/mmproj-mlabonne_gemma-3-27b-it-abliterated-f16.gguf) | f16 | 858MB | false | F16 format MMPROJ file, required for vision. |
  | [gemma-3-27b-it-abliterated-bf16.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-27b-it-abliterated-GGUF/tree/main/mlabonne_gemma-3-27b-it-abliterated-bf16) | bf16 | 54.03GB | true | Full BF16 weights. |
  | [gemma-3-27b-it-abliterated-Q8_0.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-27b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-27b-it-abliterated-Q8_0.gguf) | Q8_0 | 28.71GB | false | Extremely high quality, generally unneeded but max available quant. |
  | [gemma-3-27b-it-abliterated-Q6_K_L.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-27b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-27b-it-abliterated-Q6_K_L.gguf) | Q6_K_L | 22.51GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
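For anyone scripting against this repo, the corrected paths can be fetched programmatically. A minimal sketch, assuming the `huggingface_hub` package is installed; the repo and file names come straight from the table above, and picking the Q6_K_L quant is just an illustrative choice:

```python
# Sketch: download one quant plus the f16 mmproj file (required for vision),
# using the corrected file names from the table above.
from huggingface_hub import hf_hub_download

REPO = "bartowski/mlabonne_gemma-3-27b-it-abliterated-GGUF"

model_path = hf_hub_download(
    repo_id=REPO,
    filename="mlabonne_gemma-3-27b-it-abliterated-Q6_K_L.gguf",  # example quant
)
mmproj_path = hf_hub_download(
    repo_id=REPO,
    filename="mmproj-mlabonne_gemma-3-27b-it-abliterated-f16.gguf",  # vision projector
)
print(model_path)
print(mmproj_path)
```

For image input, llama.cpp's multimodal tools take the mmproj file as a separate argument alongside the model (typically via a `--mmproj` flag); text-only use needs only the model file.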