Trained on openllama-3b-v2 with alpaca-lora-4bit. The training data is roughly a 50/50 split of reverse proxy logs and a modified version of the orca-best dataset. Multi-turn, with 8k context.
It works best with SillyTavern character cards.
The chat format is as follows:

```
Optional System Prompt
MASTER:
DEMON:
```
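As a rough illustration, here is a minimal Python sketch that assembles a multi-turn prompt in this format. The helper name, the example messages, and the sample system prompt are hypothetical and not part of the model card; this is only one way to build the string before passing it to your inference backend.

```python
# Minimal sketch (hypothetical helper): build a multi-turn prompt in the
# MASTER:/DEMON: format described above. Example text is illustrative only.

def build_prompt(turns, system_prompt=None):
    """turns is a list of (role, text) pairs, role being 'MASTER' or 'DEMON'."""
    lines = []
    if system_prompt:  # the system prompt is optional
        lines.append(system_prompt)
    for role, text in turns:
        lines.append(f"{role}: {text}")
    # End with an open DEMON: line so the model continues as DEMON.
    lines.append("DEMON:")
    return "\n".join(lines)

if __name__ == "__main__":
    prompt = build_prompt(
        [("MASTER", "Describe the old castle."),
         ("DEMON", "It looms over the valley, half swallowed by ivy."),
         ("MASTER", "What lies inside?")],
        system_prompt="You are DEMON, a roleplay character.",
    )
    print(prompt)
```

SillyTavern handles this formatting for you when the instruct template is set to matching MASTER/DEMON sequences; the sketch above is mainly useful if you are calling the model directly.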