# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method

This model was merged using the Arcee Fusion merge method, with nbeerbower/Llama3.1-Gutenberg-Doppel-70B as the base.
### Models Merged

The following models were included in the merge:

* huihui-ai/Llama-3.3-70B-Instruct-abliterated
### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: arcee_fusion
models:
  - model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
    parameters:
      weight: 1.0
  - model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
    parameters:
      weight: 1.0
base_model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
dtype: bfloat16
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: false
  rescale: false
  filter_wise: false
  smooth: false
  allow_negative_weights: false
chat_template: auto
tokenizer:
  source: union
```
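For reference, a merge like this can be reproduced with mergekit's Python API. The sketch below is minimal and assumes the YAML above is saved as `config.yml`; the paths and `MergeOptions` flags are illustrative, and the API (`MergeConfiguration`, `run_merge`) follows mergekit's README and may shift between versions:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_PATH = "config.yml"  # the YAML configuration shown above (illustrative path)
OUTPUT_PATH = "./merged"    # where the fused model will be written (illustrative path)

# Parse and validate the merge configuration.
with open(CONFIG_PATH, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; fusing two 70B checkpoints needs substantial disk space and RAM.
run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU for the merge if one is present
        copy_tokenizer=True,             # write tokenizer files alongside the merged weights
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

Once merged (or using the published checkpoint directly), the model loads like any other Llama-family model via `transformers` (with `accelerate` installed for `device_map="auto"`). Note that a 70B model in bfloat16 occupies roughly 140 GB across your devices:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nexesenex/Llama_3.x_70b_DoppelGutenberg-L3.3_abliterated_fusion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's out_dtype
    device_map="auto",           # shard across available GPUs/CPU
)
```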