---
base_model:
- huihui-ai/Llama-3.3-70B-Instruct-abliterated
- hitachi-nlp/Llama-3.1-70B-FLDx2
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- huihui-ai/Llama-3.1-Nemotron-70B-Instruct-HF-abliterated
- huihui-ai/Llama-3.1-Tulu-3-70B-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# about

Successor to SmartTricks v1.30.

- "Flat" means a single-level merge, not a three-stage merge.
- This avoids models being "averaged over and over" until the result becomes too "soupy", even if it still benches well.
- In other words, the number of consecutive merges is kept low to preserve as much of each model's character as possible.

---
# benchs

Perplexity is much better than SmartTricks v1.30.

ik_llama.cpp benchmarks, run on the IQ6_K quant:

| Benchmark | Samples | Score |
|---|---|---|
| PPL-512 (English Wikipedia text) | 564 | 3.33 |
| ARC-Challenge | 299 | 60.20 |
| ARC-Easy | 570 | 82.28 |
| HellaSwag | 200 | 88 |
| Winogrande | 1263 | 81.53 |
| MMLU | | 46.64 |

MMLU results are much lower than they should be when measured with llama.cpp; this has been a consistent quirk since 2024.

---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, using [huihui-ai/Llama-3.3-70B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated) as a base.

### Models Merged

The following models were included in the merge:

* [hitachi-nlp/Llama-3.1-70B-FLDx2](https://huggingface.co/hitachi-nlp/Llama-3.1-70B-FLDx2)
* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
* [huihui-ai/Llama-3.1-Nemotron-70B-Instruct-HF-abliterated](https://huggingface.co/huihui-ai/Llama-3.1-Nemotron-70B-Instruct-HF-abliterated)
* [huihui-ai/Llama-3.1-Tulu-3-70B-abliterated](https://huggingface.co/huihui-ai/Llama-3.1-Tulu-3-70B-abliterated)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
models:
  - model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
    parameters:
      weight: 1.0
  - model: huihui-ai/Llama-3.1-Nemotron-70B-Instruct-HF-abliterated
    parameters:
      weight: 1.0
  - model: huihui-ai/Llama-3.1-Tulu-3-70B-abliterated
    parameters:
      weight: 1.0
  - model: hitachi-nlp/Llama-3.1-70B-FLDx2
    parameters:
      weight: 1.0
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
dtype: bfloat16
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
  filter_wise: false
  smooth: false
  allow_negative_weights: false
chat_template: auto
tokenizer:
  source: union
```
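For reference, Model Stock does not simply average the checkpoints: per layer, it interpolates between the average of the merged models and the base model, with a ratio derived from the angle between the fine-tuned weight deltas. Paraphrasing the [paper](https://arxiv.org/abs/2403.19522) (see it for the exact derivation), with k merged models, averaged weights, base weights, and the expected angle between the models' deltas:

$$ w_{\text{merged}} = t\,\bar{w} + (1-t)\,w_0, \qquad t = \frac{k\cos\theta}{1+(k-1)\cos\theta} $$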
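To reproduce the merge, the configuration above can be fed to mergekit. A minimal sketch using mergekit's Python API, assuming `pip install mergekit` and the YAML saved as `config.yaml` (the filename, output path, and option values are illustrative, not the exact settings used for this model):

```python
# Reproduction sketch: runs the Model Stock merge from the YAML above.
# Assumes `pip install mergekit`; config.yaml and ./merged-model are illustrative names.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # accelerate tensor ops on GPU if present
        copy_tokenizer=True,             # emit the union tokenizer next to the weights
        lazy_unpickle=True,              # lower peak RAM while reading shards
        low_cpu_memory=True,
    ),
)
```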
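Once merged (or once the published weights are downloaded), the model loads like any other Llama 3.x checkpoint via `transformers`. A usage sketch; the model path below is a placeholder:

```python
# Usage sketch: load the merged model and run one chat turn.
# "./merged-model" is a placeholder; substitute this repo's Hugging Face id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bfloat16 weights in the config above
    device_map="auto",           # shard the 70B model across available GPUs
)

messages = [{"role": "user", "content": "Summarize the Model Stock merge method."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```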