Circuitry_24B_V.3

This is a merge of pre-trained language models.

Lately I was experimenting with models, trying to fight the biggest problem of Circuitry v.2: self-censorship. Maybe my system prompt isn't good enough, maybe Q4_K_S is too small a quant, I don't know. So the idea of a small update to v2 was born.

However, I messed up the config and made this instead. And somehow it performs better than Circuitry v2.

But on to the model itself.

The model works great in RP, ERP, and assistant use. It produces better dialogue than v.2, with a tendency toward longer messages and more narration.

The model follows instructions, stays consistent in markdown style, and pays enough attention to detail. It can "remember" things that lie at the beginning of a 12k context, and it doesn't break down too badly with q8 context (KV cache) quantization.
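For reference, a minimal llama.cpp invocation that reproduces this setup (12k context, q8_0-quantized KV cache); the GGUF filename is hypothetical, adjust it to your local quant:

```
./llama-server \
  -m Circuitry_24B_V.3-Q4_K_S.gguf \
  --ctx-size 12288 \
  -fa \
  --cache-type-k q8_0 \
  --cache-type-v q8_0
```

Note that quantizing the V cache requires flash attention (`-fa`) in llama.cpp.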

Clichés are present (shivers running, heads spinning), though banned strings in SillyTavern fix that.
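If you go that route, SillyTavern's banned tokens/strings field takes one entry per line; quoted entries are treated as strings rather than token IDs (check your ST version's docs). The phrases below are just a sketch, pick your own offenders:

```
"shivers down"
"shivers run"
"head spins"
```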

The writing style is nice, without scenario bias: it operates in a grimdark setting as well as in a utopian paradise card.

The model performs better on good cards with dialogue and style examples, but it can also work with half-written garbage.

It easily handles two characters in a scene and remains stable with up to five, but reply length will inflate dramatically.

On the censorship front, there are some improvements. V.2 did not throw refusals, true, but it avoided explicit language until prompted directly. This one doesn't shy away from swearing and NSFW, yet remains adequate.

Russian was tested in assistant use, and it was good. Russian RP was not tested.

Tested with the MistralV7-tekken template at temperature 0.8 (sometimes 1.01) and XTC 0.1 / 0.1.
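Taking "XTC 0.1 0.1" as threshold 0.1 and probability 0.1, roughly equivalent llama.cpp sampler flags would be (filename again hypothetical):

```
./llama-cli \
  -m Circuitry_24B_V.3-Q4_K_S.gguf \
  --temp 0.8 \
  --xtc-threshold 0.1 \
  --xtc-probability 0.1
```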
