DO NOT USE
Totally borked for my first merge; I'll look into what's wrong. Feel free to give some insight!
I totally did test the model first this time, hopefully it goes well here: https://huggingface.co/yvvki/Erotophobia-24B-v1.1
# Erotophobia-24B-v1.0
My first merge and model ever! Literally depraved.
Heavily inspired by FlareRebellion/DarkHazard-v1.3-24b.
## Quants
Thanks to mradermacher for providing the static GGUF quants here:
https://huggingface.co/mradermacher/Erotophobia-24B-v1.0-GGUF
## Merge Details
This is a merge of pre-trained language models created using mergekit.
### Merge Method
This model was merged using the DARE TIES merge method, with cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition as the base model.
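For intuition, DARE TIES combines two ideas: DARE randomly drops a fraction of each task vector (a finetune's weights minus the base's) and rescales the survivors, and TIES elects a sign per parameter and discards conflicting entries before summing. A toy numpy sketch of that idea — hypothetical function names, not mergekit's actual implementation, and the `normalize: true` weight scaling from the config below is omitted:

```python
import numpy as np

def dare_sparsify(delta, density, rng):
    # DARE: randomly drop (1 - density) of the task vector's entries,
    # then rescale survivors by 1/density so the expected update is kept.
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_ties_merge(base, finetuned, densities, weights, seed=0):
    rng = np.random.default_rng(seed)
    # Task vectors: each finetune minus the shared base, DARE-sparsified
    # and scaled by its merge weight.
    deltas = [w * dare_sparsify(ft - base, d, rng)
              for ft, d, w in zip(finetuned, densities, weights)]
    # TIES sign election: keep only entries agreeing with the
    # majority-mass sign, then sum the survivors onto the base.
    elected = np.sign(sum(deltas))
    merged = sum(np.where(np.sign(d) == elected, d, 0.0) for d in deltas)
    return base + merged
```

Here `density` controls how much of each task vector survives and `weight` how strongly it pulls the merge, mirroring the per-model `density`/`weight` parameters in the configuration below.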
### Models Merged
The following models were included in the merge:
- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
- aixonlab/Eurydice-24b-v3
- ReadyArt/Omega-Darker_The-Final-Directive-24B
- ReadyArt/Forgotten-Safeword-24B-v4.0
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
tokenizer:
  source: base
dtype: bfloat16
merge_method: dare_ties
parameters:
  normalize: true
models:
  - model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition # uncensored
  - model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1 # personality
    parameters:
      density: 0.5
      weight: 0.3
  - model: aixonlab/Eurydice-24b-v3 # creativity & storytelling
    parameters:
      density: 0.5
      weight: 0.3
  - model: ReadyArt/Omega-Darker_The-Final-Directive-24B # unhinged
    parameters:
      density: 0.65
      weight: 0.2
  - model: ReadyArt/Forgotten-Safeword-24B-v4.0 # lube
    parameters:
      density: 0.35
      weight: 0.2
```
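To reproduce the merge, save the configuration to a file and run it through mergekit's CLI. A sketch, assuming a local mergekit install — the output directory name is arbitrary, and `--cuda` only applies if you have a GPU:

```shell
pip install mergekit
mergekit-yaml config.yaml ./Erotophobia-24B-v1.0 --cuda
```

Note that this downloads all five source models, so expect substantial disk and memory usage for 24B-parameter merges.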