This is a re-upload, with l3utterfly/mistral-7b-v0.2-layla-v4 as the base model of the merge.
This model is intended for fictional storytelling and role-playing. The idea behind merging these models boils down to getting less boring responses and fewer refusals.
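For reference, here is a minimal way to run the model for role-play with the transformers library. The repository id below is a placeholder (substitute this model's actual path), and the sampling settings are just reasonable starting points:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with this model's actual repository path.
model_id = "your-name/your-7b-merge"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "You are a cunning sky-pirate captain. Describe your ship to a new recruit."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.8
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```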
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
This model was merged using the SLERP merge method.
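SLERP (spherical linear interpolation) blends along the arc between two weight tensors rather than the straight line a plain weighted average takes, which keeps the magnitude of the interpolated weights from collapsing when the two tensors point in different directions. A minimal sketch of the formula on flattened tensors, not mergekit's actual implementation:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (t in [0, 1])."""
    v0_flat, v1_flat = v0.flatten().float(), v1.flatten().float()
    # Angle between the two weight vectors.
    cos_theta = torch.dot(v0_flat, v1_flat) / (v0_flat.norm() * v1_flat.norm() + eps)
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    sin_theta = torch.sin(theta)
    if sin_theta.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    # Standard SLERP weights: sin((1-t)θ)/sin(θ) and sin(tθ)/sin(θ).
    w0 = torch.sin((1 - t) * theta) / sin_theta
    w1 = torch.sin(t * theta) / sin_theta
    return (w0 * v0_flat + w1 * v1_flat).reshape(v0.shape).to(v0.dtype)
```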
The following models were included in the merge:

* l3utterfly/mistral-7b-v0.2-layla-v4 (base)
* KatyTheCutie/LemonadeRP-4.5.3
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - model: l3utterfly/mistral-7b-v0.2-layla-v4
    layer_range: [0, 32]
  - model: KatyTheCutie/LemonadeRP-4.5.3
    layer_range: [0, 32]
merge_method: slerp
base_model: l3utterfly/mistral-7b-v0.2-layla-v4
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.5
dtype: bfloat16
```
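The `t` values control how far each layer moves from the base model (t = 0) toward LemonadeRP (t = 1). A list of values acts as a gradient: the anchors are spread evenly across the 32 layers and interpolated in between, so the self-attention weights lean toward LemonadeRP in the upper layers, the MLP weights do the opposite, and all remaining tensors blend at a flat 0.5. Below is a rough sketch of that expansion; the helper is hypothetical and only approximates how mergekit turns a gradient list into per-layer weights (assuming piecewise-linear interpolation):

```python
import numpy as np

def expand_gradient(anchors: list[float], num_layers: int) -> list[float]:
    """Expand a short anchor list into one interpolation weight per layer
    via piecewise-linear interpolation across the layer stack."""
    anchor_pos = np.linspace(0.0, 1.0, num=len(anchors))  # where each anchor sits
    layer_pos = np.linspace(0.0, 1.0, num=num_layers)     # where each layer sits
    return np.interp(layer_pos, anchor_pos, anchors).tolist()

# self_attn: starts at 0 (pure base model) and climbs toward 1 (pure LemonadeRP).
print(expand_gradient([0, 0.5, 0.3, 0.7, 1], 32))
# mlp: the mirror-image schedule.
print(expand_gradient([1, 0.5, 0.7, 0.3, 0], 32))
```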