Experimental merges
A collection of three merges with particular features.
A series of experiments in the "empowerment" of models, combining my usual stabilizer (L3.1 70B Hermes 3 lorablated and its finetunes) with the recently discovered perplexity-dropper (L3.1 70B Tess 3).
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with Steelskull/L3.3-Electra-R1-70b as the base.
The following models were included in the merge:
- mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- migtissera/Tess-3-Llama-3.1-70B
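For intuition: Model Stock (Jang et al., 2024) interpolates between the base weights and the average of the fine-tuned weights, choosing the ratio from the angle between the fine-tunes' task vectors. Below is a minimal per-tensor sketch of that idea in plain PyTorch. The function name is made up, and estimating cos(θ) as the mean pairwise cosine for k > 2 models is my assumption, so treat this as an illustration rather than mergekit's actual implementation:

```python
import torch

def model_stock_tensor(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    """Toy per-tensor Model Stock merge; assumes at least two fine-tunes."""
    k = len(tuned)
    # Task vectors: how far each fine-tune moved away from the base.
    deltas = [(w - base).flatten() for w in tuned]
    # Estimate cos(theta) as the mean pairwise cosine similarity
    # between task vectors (assumption for the general k-model case).
    cosines = [
        torch.nn.functional.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cosines).mean().clamp(min=0.0)
    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = torch.stack(tuned).mean(dim=0)
    # Merged weight lies between the base and the fine-tune centroid.
    return t * w_avg + (1 - t) * base

# Example with random tensors standing in for one layer's weights:
base = torch.randn(8, 8)
merged = model_stock_tensor(base, [base + 0.01 * torch.randn(8, 8) for _ in range(2)])
```

The closer the fine-tunes agree on a direction (cos θ → 1), the further the merge moves toward their average; when they disagree, it stays near the base, which is what makes the base act as a stabilizer here.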
The following YAML configuration was used to produce this model:
merge_method: model_stock
models:
  - model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
    parameters:
      weight: 1.0
  - model: migtissera/Tess-3-Llama-3.1-70B
    parameters:
      weight: 1.0
base_model: Steelskull/L3.3-Electra-R1-70b
dtype: bfloat16
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
chat_template: auto
tokenizer:
  source: union
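Saved as, say, electra-hermes-tess.yml (a placeholder name), this config can be run with mergekit's CLI (`mergekit-yaml electra-hermes-tess.yml ./merged-model --cuda`) or from Python via its documented entry point. A minimal sketch, assuming a recent mergekit:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML config shown above (placeholder file name).
with open("electra-hermes-tess.yml", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Write the merged model to ./merged-model; the options mirror
# common CLI flags (--cuda, --copy-tokenizer).
run_merge(
    config,
    "./merged-model",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```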
The same merge, with the base model also listed explicitly among the merged models and filter_wise: false spelled out:
merge_method: model_stock
models:
  - model: Steelskull/L3.3-Electra-R1-70b
    parameters:
      weight: 1.0
  - model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
    parameters:
      weight: 1.0
  - model: migtissera/Tess-3-Llama-3.1-70B
    parameters:
      weight: 1.0
base_model: Steelskull/L3.3-Electra-R1-70b
dtype: bfloat16
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
  filter_wise: false
chat_template: auto
tokenizer:
  source: union
And the same again, with Tess and Hermes listed in the opposite order:
merge_method: model_stock
models:
  - model: Steelskull/L3.3-Electra-R1-70b
    parameters:
      weight: 1.0
  - model: migtissera/Tess-3-Llama-3.1-70B
    parameters:
      weight: 1.0
  - model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
    parameters:
      weight: 1.0
base_model: Steelskull/L3.3-Electra-R1-70b
dtype: bfloat16
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
  filter_wise: false
chat_template: auto
tokenizer:
  source: union
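As a quick sanity check, any of the resulting merges loads like a normal Llama 3.x checkpoint. A sketch using transformers, with ./merged-model standing in for whichever output directory the merge wrote (device_map="auto" assumes accelerate is installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./merged-model")
model = AutoModelForCausalLM.from_pretrained(
    "./merged-model",
    torch_dtype=torch.bfloat16,  # matches the merge's out_dtype
    device_map="auto",
)

# chat_template: auto means the tokenizer carries a usable chat template.
messages = [{"role": "user", "content": "Briefly explain what a model merge is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```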