Opinion

#1
by Szarka - opened

The base models are considered top tier in the RP community; however, since they are open source and had essentially no funding behind them, they are barely finetuned.

Whatever you do and however you merge them, the only thing that can substantially increase creativity is finetuning a base model on a vast amount of RP data, and that costs a lot of money.

Tarek's Lab org

Aye, finetuning isn't an option for me at the moment. There isn't much else to try but tweak things to get them closer to what I have in mind. While 3.3 is by far the smartest of the Progenitor series, its vocab was really lacking and tame. I did notice, however, that the tokenizer change made a big difference to the range of words coming out of the model, though it still lost some coherence at higher temperatures.
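If anyone wants to try the same kind of comparison, here's a minimal sketch of loading a merge with a swapped-in tokenizer and sampling one prompt at a few temperatures to see the vocabulary/coherence trade-off. The repo names are placeholders, not the actual Progenitor repos, and it assumes the swapped tokenizer shares a compatible vocabulary with the model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo IDs -- substitute the merged model and the alternate
# tokenizer you want to test. Swapping only makes sense when the tokenizer's
# vocabulary is compatible with the model's embedding table.
model_id = "your-org/your-merged-model"
tokenizer_id = "your-org/alternate-tokenizer"

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "The tavern door creaked open and"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sweep temperature: higher values widen the range of words used,
# but tend to cost coherence.
for temperature in (0.7, 1.0, 1.3):
    output = model.generate(
        **inputs,
        do_sample=True,
        temperature=temperature,
        top_p=0.95,
        max_new_tokens=80,
    )
    print(f"--- temperature={temperature} ---")
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```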

By the way, good work on them. My fav was Progenitor 3.1. I'll give 3.4 a try today.

Tarek's Lab org

Thanks so much! Hope you like it.
