---
base_model:
- zerofata/L3.3-GeneticLemonade-Unleashed-70B
- Steelskull/L3.3-Cu-Mai-R1-70b
- nbeerbower/llama3.1-kartoffeldes-70B
- Black-Ink-Guild/Pernicious_Prophecy_70B
- huihui-ai/Llama-3.3-70B-Instruct-abliterated
library_name: transformers
tags:
- mergekit
- merge
license: llama3.3
---

# Llama3-Sapientia-70B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [nbeerbower/llama3.1-kartoffeldes-70B](https://huggingface.co/nbeerbower/llama3.1-kartoffeldes-70B) as the base.

### Models Merged

The following models were included in the merge:

* [zerofata/L3.3-GeneticLemonade-Unleashed-70B](https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-70B)
* [Steelskull/L3.3-Cu-Mai-R1-70b](https://huggingface.co/Steelskull/L3.3-Cu-Mai-R1-70b)
* [Black-Ink-Guild/Pernicious_Prophecy_70B](https://huggingface.co/Black-Ink-Guild/Pernicious_Prophecy_70B)
* [huihui-ai/Llama-3.3-70B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Steelskull/L3.3-Cu-Mai-R1-70b
    parameters:
      weight: 1
      density: 1
  - model: zerofata/L3.3-GeneticLemonade-Unleashed-70B
    parameters:
      weight: 1
      density: 1
  - model: Black-Ink-Guild/Pernicious_Prophecy_70B
    parameters:
      weight: 1
      density: 1
  - model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: nbeerbower/llama3.1-kartoffeldes-70B
parameters:
  weight: 1
  density: 1
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer:
  source: nbeerbower/llama3.1-kartoffeldes-70B
```
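
The merge can be reproduced by saving the configuration above to a file and passing it to mergekit's `mergekit-yaml` command (for example, `mergekit-yaml config.yaml ./Llama3-Sapientia-70B`). Below is a minimal sketch of loading the merged model with 🤗 Transformers for inference; the repository id is a placeholder (this card does not state where the merge is hosted), and the `bfloat16` / `device_map="auto"` settings assume enough GPU memory to shard a 70B model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id; replace with the actual location of this merge.
model_id = "your-namespace/Llama3-Sapientia-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype in the config above
    device_map="auto",           # shard across available GPUs
)

# Build a chat prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and print only the newly generated tokens.
output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```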