---
base_model:
- nbeerbower/mistral-nemo-gutenberg-12B-v4
- nbeerbower/Mistral-Nemo-12B-abliterated-LORA
- Elizezen/Himeyuri-v0.1-12B
library_name: transformers
tags:
- mergekit
- merge
language:
- en
- ja
---
![image/png](https://huggingface.co/yamatazen/Ayla-Light-12B/resolve/main/al.png?download=true)

# UGI leaderboard results
![image/png](https://huggingface.co/yamatazen/Ayla-Light-12B/resolve/main/firefox_MDIkyt1y83.png?download=true)

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.

### Models Merged

The following models were included in the merge:
* [nbeerbower/mistral-nemo-gutenberg-12B-v4](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v4) + [nbeerbower/Mistral-Nemo-12B-abliterated-LORA](https://huggingface.co/nbeerbower/Mistral-Nemo-12B-abliterated-LORA)
* [Elizezen/Himeyuri-v0.1-12B](https://huggingface.co/Elizezen/Himeyuri-v0.1-12B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: nbeerbower/mistral-nemo-gutenberg-12B-v4+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
models:
  - model: Elizezen/Himeyuri-v0.1-12B
merge_method: slerp
dtype: bfloat16
parameters:
  normalize: true
  t: [0.25, 0.3, 0.5, 0.3, 0.25]
```
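In mergekit's SLERP, a list of `t` values forms a gradient across the layer stack: here the outer layers stay closer to the gutenberg+abliterated base (t ≈ 0.25), while the middle layers blend in Himeyuri evenly (t = 0.5). For reference, a minimal NumPy sketch of per-tensor spherical linear interpolation (an illustration of the math only, not mergekit's exact implementation):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns v0 (the base model's tensor), t=1 returns v1.
    """
    # Angle between the two tensors, treated as flat unit vectors.
    u0 = v0.ravel() / (np.linalg.norm(v0) + eps)
    u1 = v1.ravel() / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(u0, u1), -1.0, 1.0)
    theta = np.arccos(dot)
    sin_theta = np.sin(theta)
    if sin_theta < eps:
        # (Near-)colinear tensors: fall back to plain linear interpolation.
        return (1.0 - t) * v0 + t * v1
    # Interpolate along the great circle between the two tensors.
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / sin_theta
```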
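Since the card lists `library_name: transformers`, the merged model should load with the standard Transformers API. A minimal sketch, assuming the repo id `yamatazen/Ayla-Light-12B` taken from the image URLs above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yamatazen/Ayla-Light-12B"  # repo id inferred from the image URLs

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",           # requires the accelerate package
)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```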