Spatha_GLM_32B_V.2
This is a merge of pre-trained language models.
GLM is an interesting model for RP and general usage. The goal of this merge was to create a reasonably stable model with good prose, capable of "darker" themes, and with a good understanding of characters.
The model is actually pretty good: it's slightly faster than 27B Gemma while remaining consistent up to 16k context. It is attentive to the prompt, "smart" enough, and overall generates good, uncensored replies. It can sometimes over-focus or fall into loops, but not too often. It also works far better with a good prompt and character cards.
Russian wasn't properly tested; at first glance it was not good.
I don't recommend using quants lower than Q4_K_M; the model seems to perform far worse at lower precision.
Tested with an obscure GLM4 preset found online: Q4_K_M, 300 replies, temperature 1.04.
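For reference, here is a minimal sketch of running a Q4_K_M GGUF quant with llama-cpp-python at the sampling temperature used above. The file name, context size, and prompt content are assumptions; adjust them to the quant and character card you actually use.

```python
# Minimal sketch: run a Q4_K_M quant with llama-cpp-python (assumptions noted).
from llama_cpp import Llama

llm = Llama(
    model_path="Spatha_GLM_32B_V.2.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=16384,      # stayed consistent up to ~16k context in testing
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "<character card / system prompt here>"},
        {"role": "user", "content": "Hello!"},
    ],
    temperature=1.04,  # temperature used during testing
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```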