bartowski committed (verified)
Commit 524f434 · Parent: 500ec14

Update README.md

Files changed (1): README.md (+5 −4)
README.md CHANGED

@@ -1,6 +1,6 @@
 ---
 quantized_by: bartowski
-pipeline_tag: text-generation
+pipeline_tag: image-text-to-text
 license: gemma
 base_model: mlabonne/gemma-3-27b-it-abliterated
 ---
@@ -25,14 +25,15 @@ Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or a
 
 {prompt}<end_of_turn>
 <start_of_turn>model
-<end_of_turn>
-<start_of_turn>model
+
 ```
 
 ## Download a file (not the whole branch) from below:
 
 | Filename | Quant type | File Size | Split | Description |
 | -------- | ---------- | --------- | ----- | ----------- |
+| [mmproj-gemma-3-27b-it-abliterated-f32.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-27b-it-GGUF/blob/main/mmproj-mlabonne_gemma-3-27b-it-f32.gguf) | f32 | 1.69GB | false | F32 format MMPROJ file, required for vision. |
+| [mmproj-gemma-3-27b-it-abliterated-f16.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-27b-it-GGUF/blob/main/mmproj-mlabonne_gemma-3-27b-it-f16.gguf) | f16 | 858MB | false | F16 format MMPROJ file, required for vision. |
 | [gemma-3-27b-it-abliterated-bf16.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-27b-it-abliterated-GGUF/tree/main/mlabonne_gemma-3-27b-it-abliterated-bf16) | bf16 | 54.03GB | true | Full BF16 weights. |
 | [gemma-3-27b-it-abliterated-Q8_0.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-27b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-27b-it-abliterated-Q8_0.gguf) | Q8_0 | 28.71GB | false | Extremely high quality, generally unneeded but max available quant. |
 | [gemma-3-27b-it-abliterated-Q6_K_L.gguf](https://huggingface.co/bartowski/mlabonne_gemma-3-27b-it-abliterated-GGUF/blob/main/mlabonne_gemma-3-27b-it-abliterated-Q6_K_L.gguf) | Q6_K_L | 22.51GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
@@ -173,4 +174,4 @@ Thank you ZeroWw for the inspiration to experiment with embed/output.
 
 Thank you to LM Studio for sponsoring my work.
 
-Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
+Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
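The prompt-template hunk above removes an accidentally duplicated empty model turn (`<end_of_turn>` followed by a second `<start_of_turn>model`). For reference, a sketch of the full corrected format, assuming the usual Gemma-3 convention of folding the system prompt into the first user turn (the opening lines sit outside this hunk's context):

```
<bos><start_of_turn>user
{system_prompt}

{prompt}<end_of_turn>
<start_of_turn>model
```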
 
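To grab a single file from the table rather than the whole branch, the `huggingface_hub` CLI works; a minimal sketch, with the Q6_K_L quant picked arbitrarily from the rows above:

```
pip install -U "huggingface_hub[cli]"

huggingface-cli download bartowski/mlabonne_gemma-3-27b-it-abliterated-GGUF \
  --include "mlabonne_gemma-3-27b-it-abliterated-Q6_K_L.gguf" \
  --local-dir ./
```

For the split BF16 weights (marked `true` in the Split column), point `--include` at the folder pattern instead, e.g. `--include "mlabonne_gemma-3-27b-it-abliterated-bf16/*"`.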
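The new mmproj rows are what the `pipeline_tag: image-text-to-text` change reflects: llama.cpp needs a multimodal projector file alongside the model weights before it can accept images. A rough sketch of how the pairing looks with llama.cpp's multimodal CLI; the binary name has changed across llama.cpp releases, so treat it (and `photo.jpg`) as placeholders to check against your build:

```
# Hypothetical invocation: pair a quant with the f16 mmproj file for vision.
./llama-mtmd-cli \
  -m mlabonne_gemma-3-27b-it-abliterated-Q6_K_L.gguf \
  --mmproj mmproj-mlabonne_gemma-3-27b-it-f16.gguf \
  --image photo.jpg \
  -p "Describe this image."
```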