This model was converted to GGUF format from [`DavidAU/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated`](https://huggingface.co/DavidAU/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/DavidAU/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated) for more details on the model.
---

A massive 22B, 62-layer merge of the fantastic "The-Omega-Directive-Qwen3-14B-v1.1" and the off-the-scale "Goekdeniz-Guelmez/Josiefied-Qwen3-14B-abliterated-v3" in Qwen3, with full reasoning (can be turned on or off). The model is also completely uncensored/abliterated.

4 example generations below, and detailed usage instructions.

Requires:

- ChatML or Jinja template (embedded; also see notes below)
- Temp range 0 to 5 (suggest 0.5 to 2.5)
- Rep pen range 1 to 1.1 (suggest 1.05)
- System prompt (optional) below.
- Context is 40k / 40000.
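If your runtime does not apply the embedded template automatically, the ChatML layout Qwen3 models expect can be sketched as below. This is an illustrative rendering only; the exact embedded Jinja template may differ, so prefer it when available.

```python
# Minimal ChatML rendering sketch (illustrative; the model's embedded
# template is authoritative).
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Open an assistant turn to cue the model to respond.
    out.append("<|im_start|>assistant\n")
    return "\n".join(out)

prompt = to_chatml([
    {"role": "system", "content": "You are a vivid fiction writer."},
    {"role": "user", "content": "Continue the scene."},
])
```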
Suggested Settings:

- temp 0.4 to 2.5
- temp 0.2 to 0.8 for specific reasoning tasks / non-creative tasks
- rep pen 1.05
- top_k: 100, top_p: 0.95, min_p: 0.05
- context of at least 8k
- Other samplers/parameters as required.

---
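The suggested settings above can be collected into keyword arguments in the llama-cpp-python style. The parameter names and the mid-range temperature pick are assumptions; adjust both to your runtime.

```python
# Suggested sampler settings from this card, llama-cpp-python style
# (names/values are an assumption; map them to your runtime's naming).
CREATIVE = dict(
    temperature=1.2,      # mid-range pick from the suggested 0.4-2.5
    repeat_penalty=1.05,  # "rep pen 1.05"
    top_k=100,
    top_p=0.95,
    min_p=0.05,
)
# Lower temp (0.2-0.8) for reasoning / non-creative tasks.
REASONING = dict(CREATIVE, temperature=0.5)

# Usage sketch: llm(prompt, max_tokens=512, **CREATIVE)
```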
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
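The brew install step is the usual one-liner (the `llama.cpp` Homebrew formula provides `llama-cli` and `llama-server`):

```shell
# Install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp
```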