Fine-tuned from openllama-3b-v2 using alpaca-lora-4bit. The training data is roughly a 50/50 mix of reverse-proxy chat logs and a modified version of the orca-best dataset. Multi-turn, with an 8k context window.

It works best with SillyTavern character cards.

The chat format is as follows:

```
Optional System Prompt

MASTER: <user message>

DEMON: <model reply>
```
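As a minimal sketch, a multi-turn prompt in this format could be assembled like so. The `MASTER:`/`DEMON:` labels come from the card above; the helper function, its separator choice, and the example text are hypothetical:

```python
def build_prompt(turns, system_prompt=None):
    """Assemble a prompt using the MASTER:/DEMON: role labels.

    turns: list of (user_text, model_text) pairs; pass None as the
    model_text of the final turn so generation continues from "DEMON:".
    """
    parts = []
    if system_prompt:
        parts.append(system_prompt)  # optional system prompt goes first
    for user_text, model_text in turns:
        parts.append(f"MASTER: {user_text}")
        # Leave the last model slot open for the model to complete.
        parts.append(f"DEMON: {model_text}" if model_text is not None else "DEMON:")
    return "\n\n".join(parts)

prompt = build_prompt(
    [("Describe your lair.", None)],
    system_prompt="You are a demon bound to answer truthfully.",
)
```

The resulting string ends with a bare `DEMON:` so the model's completion supplies the reply.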
