---
base_model:
- Qwen/Qwen2.5-7B-Instruct
- Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
- prithivMLmods/Deepthink-Reasoning-Adapter
- prithivMLmods/Viper-Coder-HybridMini-v1.3
- bunnycore/Qwen-2.5-7b-s1k-lora_model
- cooperleong00/Qwen2.5-7B-Instruct-Jailbroken
- prithivMLmods/Omni-Reasoner-Merged
- HumanLLMs/Human-Like-Qwen2.5-7B-Instruct
- prithivMLmods/Novaeus-Promptist-7B-Instruct
- HoangHa/Pensez-v0.1-e5
- lightblue/Karasu-DPO-7B
- prithivMLmods/QwQ-LCoT2-7B-Instruct
- IIC/RigoChat-7b-v2
- Lekhansh/Qwen2.5_7b_notesCorrector
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) as the base. A numerical sketch of the method is given at the end of this card.

### Models Merged

The following models were included in the merge. Entries joined with `+` use mergekit's `model+adapter` syntax: the LoRA adapter is applied to the model before merging.

* [Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview](https://huggingface.co/Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview) + [prithivMLmods/Deepthink-Reasoning-Adapter](https://huggingface.co/prithivMLmods/Deepthink-Reasoning-Adapter)
* [prithivMLmods/Viper-Coder-HybridMini-v1.3](https://huggingface.co/prithivMLmods/Viper-Coder-HybridMini-v1.3) + [bunnycore/Qwen-2.5-7b-s1k-lora_model](https://huggingface.co/bunnycore/Qwen-2.5-7b-s1k-lora_model)
* [cooperleong00/Qwen2.5-7B-Instruct-Jailbroken](https://huggingface.co/cooperleong00/Qwen2.5-7B-Instruct-Jailbroken)
* [prithivMLmods/Omni-Reasoner-Merged](https://huggingface.co/prithivMLmods/Omni-Reasoner-Merged)
* [HumanLLMs/Human-Like-Qwen2.5-7B-Instruct](https://huggingface.co/HumanLLMs/Human-Like-Qwen2.5-7B-Instruct)
* [prithivMLmods/Novaeus-Promptist-7B-Instruct](https://huggingface.co/prithivMLmods/Novaeus-Promptist-7B-Instruct)
* [HoangHa/Pensez-v0.1-e5](https://huggingface.co/HoangHa/Pensez-v0.1-e5)
* [lightblue/Karasu-DPO-7B](https://huggingface.co/lightblue/Karasu-DPO-7B)
* [prithivMLmods/QwQ-LCoT2-7B-Instruct](https://huggingface.co/prithivMLmods/QwQ-LCoT2-7B-Instruct)
* [IIC/RigoChat-7b-v2](https://huggingface.co/IIC/RigoChat-7b-v2) + [Lekhansh/Qwen2.5_7b_notesCorrector](https://huggingface.co/Lekhansh/Qwen2.5_7b_notesCorrector)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: HoangHa/Pensez-v0.1-e5 # French
  - model: prithivMLmods/QwQ-LCoT2-7B-Instruct # QwQ-like model
  - model: lightblue/Karasu-DPO-7B # Japanese
  - model: HumanLLMs/Human-Like-Qwen2.5-7B-Instruct # Human-like conversations
  - model: cooperleong00/Qwen2.5-7B-Instruct-Jailbroken # Uncensored questions
  - model: prithivMLmods/Viper-Coder-HybridMini-v1.3+bunnycore/Qwen-2.5-7b-s1k-lora_model # Coding and reasoning hybrid, now with CoT
  - model: prithivMLmods/Novaeus-Promptist-7B-Instruct # Prompt enhancement
  - model: IIC/RigoChat-7b-v2+Lekhansh/Qwen2.5_7b_notesCorrector # This one's for the Spanish speakers! Plus a bonus notes corrector
  - model: Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview+prithivMLmods/Deepthink-Reasoning-Adapter # Dyanka with a CoT LoRA
  - model: prithivMLmods/Omni-Reasoner-Merged
merge_method: model_stock
parameters:
base_model: Qwen/Qwen2.5-7B-Instruct
dtype: bfloat16
tokenizer_source: Qwen/Qwen2.5-7B-Instruct
```
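
To reproduce the merge, save the configuration above as `config.yaml` and run mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merged-model`. The merged model then follows the standard Qwen2.5 chat flow in `transformers`. A minimal sketch, assuming `model_id` points at the merged weights (the path below is a placeholder, not a published repo id):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: local mergekit output directory (or this repo's Hub id).
model_id = "./merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches `dtype: bfloat16` in the config
    device_map="auto",           # requires the `accelerate` package
)

messages = [{"role": "user", "content": "Summarise model merging in two sentences."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```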
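
For intuition about `merge_method: model_stock`: the paper observes that fine-tuned checkpoints tend to sit at a roughly constant angle θ from one another around the base model, and merges them by interpolating between the base weights and the average of the fine-tuned weights. The sketch below illustrates that per-tensor rule, assuming the paper's interpolation ratio t = k·cos θ / (1 + (k − 1)·cos θ) for k fine-tuned models; it is a rough illustration of the idea, not mergekit's actual implementation.

```python
import torch
import torch.nn.functional as F

def model_stock_layer(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Sketch of Model Stock (arXiv:2403.19522) for a single weight tensor.

    Assumption: the angle between fine-tuned deltas is roughly uniform, so we
    estimate cos(theta) as the average pairwise cosine similarity and apply
    t = k*cos(theta) / (1 + (k-1)*cos(theta)).
    """
    k = len(finetuned)
    deltas = [(w - base).flatten() for w in finetuned]

    # Estimate cos(theta) from all pairs of fine-tuned deltas.
    cos_sum, pairs = 0.0, 0
    for i in range(k):
        for j in range(i + 1, k):
            cos_sum += F.cosine_similarity(deltas[i], deltas[j], dim=0).item()
            pairs += 1
    cos_theta = cos_sum / max(pairs, 1)

    # Interpolate between the fine-tuned average and the base weights.
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```

The larger the agreement between the fine-tuned models (cos θ → 1), the more weight the merged tensor gives to their average; when they disagree (cos θ → 0), the result stays close to the base model.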