license: apache-2.0
library_name: transformers
language:
- en
- fr
- zh
- de
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prose
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- qwen3
- horror
- finetune
- merge
- not-for-all-audiences
- uncensored
- abliterated
- llama-cpp
- gguf-my-repo
base_model: DavidAU/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated
pipeline_tag: text-generation
# Triangle104/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated-Q6_K-GGUF
This model was converted to GGUF format from DavidAU/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
A massive 22B-parameter, 62-layer merge of the fantastic "The-Omega-Directive-Qwen3-14B-v1.1" and the off-the-scale "Goekdeniz-Guelmez/Josiefied-Qwen3-14B-abliterated-v3", both Qwen3 models, with full reasoning (which can be turned on or off). The model is also completely uncensored/abliterated.
Four example generations and detailed usage instructions are provided on the original model card; the key requirements and settings are summarized below.
Requires:
- ChatML or Jinja chat template (embedded in the GGUF; a sketch of the ChatML format follows this list, and also see the notes below)
- Temp range 0 to 5 (suggested: 0.5 to 2.5)
- Rep pen range 1 to 1.1 (suggested: 1.05)
- System prompt (optional) below.
- Context: 40k / 40,000 tokens.
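For reference, Qwen3 models use the ChatML turn format, which the embedded template produces automatically. A minimal sketch of a formatted prompt (the system and user text here are placeholders, not taken from the original card):

```
<|im_start|>system
You are a helpful creative-writing assistant.<|im_end|>
<|im_start|>user
Continue the scene: the lights in the station flickered once, then died.<|im_end|>
<|im_start|>assistant
```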
Suggested Settings:
- temp 0.4 to 2.5
- temp 0.2 to 0.8 for specific reasoning / non-creative tasks
- rep pen 1.05
- top_k 100, top_p 0.95, min_p 0.05
- context of at least 8k
- Other samplers/parameters as required (see the example flags after this list).
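Expressed as llama.cpp command-line flags, one reasonable pick from the ranges above looks like the sketch below (the prompt text is a placeholder; tune the values to taste):

```bash
# Suggested samplers as llama-cli flags; the values are one pick from
# the ranges above, not the only valid choice.
llama-cli \
  --hf-repo Triangle104/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated-Q6_K-GGUF \
  --hf-file qwen3-the-josiefied-omega-directive-22b-uncensored-abliterated-q6_k.gguf \
  --temp 1.2 --repeat-penalty 1.05 \
  --top-k 100 --top-p 0.95 --min-p 0.05 \
  -c 8192 \
  -p "Write the opening scene of a storm at sea."
```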
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:

```bash
llama-cli --hf-repo Triangle104/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated-Q6_K-GGUF --hf-file qwen3-the-josiefied-omega-directive-22b-uncensored-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
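For interactive chat, llama-cli's conversation mode applies the GGUF's embedded chat template for you. A sketch, assuming a reasonably recent llama.cpp build (the system prompt is a placeholder):

```bash
# -cnv starts conversation mode; -sys sets a system prompt.
llama-cli \
  --hf-repo Triangle104/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated-Q6_K-GGUF \
  --hf-file qwen3-the-josiefied-omega-directive-22b-uncensored-abliterated-q6_k.gguf \
  -cnv -sys "You are a helpful creative-writing assistant."
```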
Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated-Q6_K-GGUF --hf-file qwen3-the-josiefied-omega-directive-22b-uncensored-abliterated-q6_k.gguf -c 2048
```
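Once running, llama-server exposes an OpenAI-compatible chat endpoint (port 8080 by default). A minimal request sketch (the message content is a placeholder):

```bash
# Query the running server's OpenAI-compatible chat endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Write a two-sentence horror hook."}],
    "temperature": 1.2,
    "top_p": 0.95
  }'
```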
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
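If you would rather fetch the GGUF file yourself instead of relying on the --hf-repo/--hf-file flags, one option (assuming the Hugging Face CLI is installed) is:

```bash
# Download the Q6_K GGUF into the current directory.
huggingface-cli download \
  Triangle104/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated-Q6_K-GGUF \
  qwen3-the-josiefied-omega-directive-22b-uncensored-abliterated-q6_k.gguf \
  --local-dir .
```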
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
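Recent llama.cpp versions build with CMake rather than Make; if the make invocation above fails on a current checkout, the rough CMake equivalent (an assumption based on upstream's current build instructions) is:

```bash
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```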
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated-Q6_K-GGUF --hf-file qwen3-the-josiefied-omega-directive-22b-uncensored-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated-Q6_K-GGUF --hf-file qwen3-the-josiefied-omega-directive-22b-uncensored-abliterated-q6_k.gguf -c 2048
```