What Text Completion presets do you use?

#3 by AkiEvans

I like the way your System Prompt works, but I noticed that it is heavily influenced by the sampler settings. Which ones do you use?

It kinda depends on the model you are using, to be fair

I made this system prompt using Mistral Small, and with it, temperature around 0.35 is stable and 0.7 is more creative.
So I use it at 0.65, or with dynamic temp between the two. But each model has its own ideal temperature settings.
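
If you want to see what dynamic temp actually does, here is a rough sketch of the idea on a plain numpy logits array. This is my simplification, not any backend's exact code (real DynaTemp also has an exponent knob): the more uncertain the model is about the next token, the closer the temperature moves toward the creative end of the range.

```python
import numpy as np

def apply_dynamic_temperature(logits, temp_min=0.35, temp_max=0.7):
    """Sketch of entropy-based dynamic temperature: rescale the logits with a
    temperature picked between temp_min and temp_max according to how spread
    out (uncertain) the token distribution is."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    entropy = -np.sum(probs * np.log(probs + 1e-10))
    normalized = entropy / np.log(len(probs))   # 0 = fully confident, 1 = uniform
    temp = temp_min + (temp_max - temp_min) * normalized
    return logits / temp
```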

Now, for the rest of the settings, I am not really into the crazy sampler-stacking stuff.
Fixing models by messing with the samplers is a waste of time imo. I don't like fighting with the model; if you have to, the model is borked.
This person wrote a guide that is really similar to how I set mine up: https://rentry.co/samplersettings

It's pretty minimal: I use some minP to cut trash tokens, generally at 0.02 unless the model needs some other value,
and DRY to combat repetition, generally at 0.8/1.75, with allowed length at 4 normally, turned down to 2 if I start to notice the model falling into repetition (quick sketch of these below).
The only other sampler I use is XTC, which I turn on when the RP is already deep in repetition to try to steer it out of it; the guide explains how to use it.
But I don't like to leave it turned on all the time, because it makes the model worse at following prompts, since you are stopping it from writing the most probable tokens.
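
In case the names don't mean much, here's a minimal sketch of what these three do, again on a plain numpy logits array. I'm reading the 0.8/1.75 as DRY multiplier/base (the usual defaults); the parameter names are just illustrative, the real thing lives inside your backend.

```python
import numpy as np

def min_p_filter(logits, min_p=0.02):
    """minP: keep only tokens at least min_p times as likely as the single
    most likely token; everything below that gets cut (the 'trash tokens')."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    keep = probs >= min_p * probs.max()
    return np.where(keep, logits, -np.inf)

def dry_penalty(match_length, multiplier=0.8, base=1.75, allowed_length=4):
    """DRY: penalize a token that would extend a sequence already repeated in
    the context. Nothing happens below allowed_length, then the penalty grows
    exponentially, which is what breaks repetition loops. Lowering
    allowed_length (e.g. to 2) makes it kick in earlier."""
    if match_length < allowed_length:
        return 0.0
    return multiplier * base ** (match_length - allowed_length)

def xtc_filter(logits, threshold=0.1, probability=0.5, rng=np.random.default_rng()):
    """XTC: with some probability, remove every token above the threshold
    except the least likely of them. That is exactly why it hurts
    prompt-following: it blocks the most probable continuations."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    if rng.random() >= probability:
        return logits
    above = np.flatnonzero(probs >= threshold)
    if above.size < 2:
        return logits                # need at least two "top choices" to cut any
    keep_one = above[np.argmin(probs[above])]
    out = logits.copy()
    out[np.setdiff1d(above, keep_one)] = -np.inf
    return out
```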

That's it for basically every model: the temp it wants to be creative, minP, and DRY.

Edit: Oh, and my banned tokens list, of course.

Thanks for the answer. What response length do you use? My biggest problem with Cydonia (also based on Mistral Small) is messages breaking off mid-way, and "continue" takes the message too far.

Small AIs will always try to match the previous response length. It's a problem, but you can use it to your advantage.
Write the greeting message at the length you want the AI's turns to be, and just trim the latest responses down to the right length so it starts to replicate it.
That's how you control them; limiting the response size doesn't make the AI write less, it just cuts it short.

I am a slow-burn player: I like small responses with small developments that let me constantly act too. Plus, it helps the AI avoid acting for you or writing endlessly about inner feelings or things that don't matter, since it can keep things short. But I keep the limit high because I don't want anything to get cut off either. It's at 896 right now, but it rarely reaches that.

Ok, thanks.
