This model was converted to GGUF format from [`allura-org/Gemma-3-Glitter-27B`](https://huggingface.co/allura-org/Gemma-3-Glitter-27B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/allura-org/Gemma-3-Glitter-27B) for more details on the model.

---

A creative writing model based on Gemma 3 27B.

Columbidae/gemma-3-27b-half, a 50/50 merge of 27B IT and 27B PT, was used as the base model. (This was done because of the success of Starshine, a 50/50 IT and PT merge.)

The inclusion of the PT model does weaken the instruct behavior, but it also weakens the censorship/hesitancy to participate in certain fictional stories. The prose also becomes more natural with less of the IT model included.

This model does better with short, to-the-point prompts. Long, detailed system prompts will often confuse it. (Tested with 1000-2000 token system prompts, which gave lackluster results compared to 100-500 token prompts.)

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
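For example, a minimal sketch of the usual flow (the Hub repo path and GGUF filename below are placeholders; substitute this repo's actual quant file):

```bash
# Install the llama.cpp CLI and server via Homebrew
brew install llama.cpp

# Run the CLI directly against a GGUF file from the Hub
# (replace the repo and filename with the quant you actually want)
llama-cli --hf-repo your-username/Gemma-3-Glitter-27B-GGUF \
  --hf-file gemma-3-glitter-27b-q4_k_m.gguf \
  -p "Write the opening paragraph of a short story about a lighthouse keeper."

# Or serve it over an HTTP API (OpenAI-compatible, port 8080 by default)
llama-server --hf-repo your-username/Gemma-3-Glitter-27B-GGUF \
  --hf-file gemma-3-glitter-27b-q4_k_m.gguf \
  -c 4096
```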