---
base_model:
- ArliAI/QwQ-32B-ArliAI-RpR-v1
- trashpanda-org/Qwen2.5-32B-Marigold-v1
- trashpanda-org/Qwen2.5-32B-Marigold-v0
- mergekit-community/Qwen2.5-32B-gokgok-step3
library_name: transformers
tags:
- mergekit
- merge
---
# Balls

Nice writing, good ERP potential. It works with [LeCeption](https://huggingface.co/TheSkullery/Unnamed-Exp-70b-v0.6.A/blob/main/LeCeption-XML-V2.json)... but it doesn't continue the story. The model just goes "...Finally, the response should *blah-blah-blah*" and stops. It doesn't even close the `<think>` container! So I guess you are better off using the stepped thinking extension with a `<think>` prefill in it rather than trying to get it to work in a single message.

**Settings:**

- Samplers: top nsigma 1 with temp 1
- Sys. prompt: the aforementioned LeCeption or the one from [here](https://files.catbox.moe/b6nwbc.json)

**Quants**

- Q4_K_S: https://huggingface.co/Yobenboben/Qwen2.5-32B-Juicy_Snowballs_Q4_K_S/resolve/main/bols.gguf?download=true

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method with [mergekit-community/Qwen2.5-32B-gokgok-step3](https://huggingface.co/mergekit-community/Qwen2.5-32B-gokgok-step3) as the base.
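For intuition, TIES works in three steps: trim each task vector (fine-tuned minus base) to its largest-magnitude entries, elect a per-parameter sign by majority mass, then average only the entries that agree with the elected sign. A toy NumPy sketch of that idea on flat parameter vectors (this is an illustration, not mergekit's actual implementation; the function name `ties_merge` is made up here):

```python
import numpy as np

def ties_merge(base, finetuned, weights, density):
    """Toy TIES merge on flat parameter vectors.

    base: (d,) base model parameters
    finetuned: list of (d,) fine-tuned parameter vectors
    weights: per-model scalar weights
    density: fraction of each task vector to keep, in (0, 1]
    """
    trimmed = []
    for ft in finetuned:
        tv = ft - base                      # task vector
        k = int(round(density * tv.size))   # entries to keep
        keep = np.argsort(np.abs(tv))[-k:]  # top-k by magnitude
        sparse = np.zeros_like(tv)
        sparse[keep] = tv[keep]
        trimmed.append(sparse)

    stacked = np.stack([w * t for w, t in zip(weights, trimmed)])
    # Elect a sign per parameter: the direction with more total mass wins.
    sign = np.sign(stacked.sum(axis=0))
    sign[sign == 0] = 1
    # Average only the entries that agree with the elected sign.
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    total = np.where(agree, stacked, 0.0).sum(axis=0)
    count = np.maximum(agree.sum(axis=0), 1)
    return base + total / count

# Two models disagree on the sign of the first parameter; the larger
# mass (-3) wins the election, so the +1 contribution is dropped.
base = np.zeros(2)
merged = ties_merge(base, [np.array([1.0, 1.0]), np.array([-3.0, 1.0])],
                    weights=[1.0, 1.0], density=1.0)
print(merged)  # → [-3.  1.]
```

The `density: 0.9` values in the config below correspond to the trim step: only the top 90% of each task vector's entries (by magnitude) survive into the sign election.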
### Models Merged

The following models were included in the merge:

* [ArliAI/QwQ-32B-ArliAI-RpR-v1](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v1)
* [trashpanda-org/Qwen2.5-32B-Marigold-v1](https://huggingface.co/trashpanda-org/Qwen2.5-32B-Marigold-v1)
* [trashpanda-org/Qwen2.5-32B-Marigold-v0](https://huggingface.co/trashpanda-org/Qwen2.5-32B-Marigold-v0)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: trashpanda-org/Qwen2.5-32B-Marigold-v1
    parameters:
      weight: 0.9
      density: 0.9
  - model: trashpanda-org/Qwen2.5-32B-Marigold-v0
    parameters:
      weight: 1
      density: 1
  - model: ArliAI/QwQ-32B-ArliAI-RpR-v1
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: mergekit-community/Qwen2.5-32B-gokgok-step3
parameters:
  weight: 0.9
  density: 0.9
  normalize: true
  int8_mask: true
tokenizer_source: ArliAI/QwQ-32B-ArliAI-RpR-v1
dtype: float32
out_dtype: bfloat16
```
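To reproduce the merge, the config above can be sanity-checked locally before handing it to mergekit. A minimal sketch, assuming PyYAML is installed (the `config_text` here is an abridged copy of the config, not the full file):

```python
import yaml  # PyYAML, assumed available

# Abridged copy of the merge config above, just to check the syntax parses.
config_text = """\
models:
  - model: trashpanda-org/Qwen2.5-32B-Marigold-v1
    parameters:
      weight: 0.9
      density: 0.9
  - model: ArliAI/QwQ-32B-ArliAI-RpR-v1
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: mergekit-community/Qwen2.5-32B-gokgok-step3
dtype: float32
out_dtype: bfloat16
"""

cfg = yaml.safe_load(config_text)
assert cfg["merge_method"] == "ties"
print(len(cfg["models"]))  # → 2
```

With mergekit installed (`pip install mergekit`), the full config saved as `config.yaml` can then be run with mergekit's `mergekit-yaml config.yaml ./output-dir` CLI.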