Available formats: Safetensors, GGUF

These are base models trained on Civitai's top uncensored prompts, used by the Telegram bot @goonspromptbot from goonsai.com for video generation.

Each model combines an abliterated base, which makes it fully uncensored, with fine-tuning on the specific task of writing image or video generation prompts.

Help

This is a completion-style model: you provide a few words describing what you need, and the model completes them with as much information as possible. Depending on the interface you use, you can improve results with a template, though one is not required.

Example

Start with a woman dancing in swimsuit and the model will fill in the rest of the prompt, for example:

quality, high resolution, elegant pose, shimmering lighting, beach setting, tropical breeze, smooth motion, round belly, long legs, head tilted slightly to one side, eyes closed gently, smile on face, water droplets on skin
  • You can roll the dice on the output or follow up to refine it. Don't worry about strange formatting as long as the keywords are there; the models don't care about grammar, only about tokens.
  • You can always edit the template in the Modelfile, and you can join the Discord channel.
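
If you are driving the model from code rather than a chat UI, a minimal sketch using the Ollama Python client is shown below. The model name goonsai-nsfw-small is a placeholder for whatever name you created or pulled the model under.

```python
# Minimal completion sketch via the Ollama Python client.
# "goonsai-nsfw-small" is a placeholder model name -- substitute the
# name you used when creating the model from its Modelfile.
import ollama

response = ollama.generate(
    model="goonsai-nsfw-small",
    prompt="a woman dancing in swimsuit",  # a few words is enough
)
print(response["response"])  # the expanded keyword prompt
```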

Limitations

There are situations where the model will spit out trigger words and LoRA names that may belong to deleted LoRAs. Note that the default model already includes a simple system message, so it will still work if you do not provide one. The existing template is included in the modelfiles, so you can adapt it for image or video generation.
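
For reference, the general shape of such a Modelfile is sketched below; the GGUF file name and the SYSTEM/TEMPLATE wording are placeholders, not the shipped defaults.

```
# Sketch only: file name, system message and template text are placeholders.
FROM ./nsfw-small.Q8_0.gguf

SYSTEM """You write detailed video generation prompts as comma-separated keywords."""

TEMPLATE """{{ if .System }}{{ .System }}
{{ end }}{{ .Prompt }}"""

PARAMETER temperature 0.8
```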

Models are updated regularly with more training data.

Issues

  • The Qwen3 model is experimental and intended for development only; you likely do not need it.
  • The default template assumes you are generating video. You have to supply your own system prompt if you want to generate image prompts (see the sketch below).
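
If you use the Ollama Python client, the default system message can be overridden per request. This is a sketch only; the model name and system wording are placeholders.

```python
# Sketch: override the video-oriented default with an image-oriented
# system prompt. Model name and system text are placeholders.
import ollama

response = ollama.generate(
    model="goonsai-nsfw-small",
    system="You write detailed image generation prompts as comma-separated keywords.",
    prompt="a woman dancing in swimsuit",
)
print(response["response"])
```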

FAQ

  • Can it generate a prompt from images or videos (image/video -> text)? No. Until I train a vision model, that is not possible.
  • It does not work with XYZ interface. I have added GGUF files to the model folder. I am not familiar with all the tools, since I run this directly with Python code or Ollama for testing; a minimal example follows below. I use OpenWebUI only because it is somewhat helpful for development and testing, which is not exactly a resounding endorsement.
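
If you want to load a GGUF file directly from Python instead of going through Ollama, a rough sketch with the llama-cpp-python package could look like this; the file name is a placeholder for whichever GGUF you downloaded.

```python
# Sketch: run a downloaded GGUF directly with llama-cpp-python.
# The model_path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./nsfw-small.Q8_0.gguf", n_ctx=2048)

out = llm(
    "a woman dancing in swimsuit",  # a few words; the model completes the rest
    max_tokens=128,
)
print(out["choices"][0]["text"])
```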

Support

r/goonsai

Models / Ollama

  • NSFW-small: 10K model for older GPUs; it may even work without a GPU or on a laptop GPU.
  • NSFW-Large: 100K model, needs about 8 GB of VRAM.
Model details

  • Format: GGUF
  • Parameters: 1.54B
  • Architecture: qwen2
  • Precision: 16-bit