# prototype-0.4x213
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with /workspace/prototype-0.4x197 as the base model.
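For context, here is a minimal sketch of the drop-and-rescale step that DARE applies to each model's delta before the TIES-style sign-consensus merge. The function name `dare_sparsify` is hypothetical and the code is an illustration of the technique, not mergekit's actual implementation; `density` corresponds to the per-model `density` values in the configuration below.

```python
import torch

def dare_sparsify(base: torch.Tensor, finetuned: torch.Tensor,
                  density: float) -> torch.Tensor:
    """Drop-And-REscale: keep a random `density` fraction of the delta
    (finetuned - base) and rescale the survivors by 1/density so the
    expected magnitude of the update is preserved."""
    delta = finetuned - base
    mask = torch.rand_like(delta) < density  # keep each element w.p. `density`
    return delta * mask / density

# TIES then resolves sign disagreements across the sparsified deltas:
# elements whose sign conflicts with the (weight-summed) majority sign are
# zeroed, and the surviving weighted deltas are added back onto the base.
```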
### Models Merged
The following models were included in the merge:
- /workspace/prototype-0.4x208
- /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
- /workspace/prototype-0.4x210
- /workspace/cache/models--bruhzair--prototype-0.4x195/snapshots/a1cb4161a717ebf8052ce09dccaedf2dde2a7a9f
- /workspace/prototype-0.4x204
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
    parameters:
      weight: 0.16
      density: 0.4
  - model: /workspace/cache/models--bruhzair--prototype-0.4x195/snapshots/a1cb4161a717ebf8052ce09dccaedf2dde2a7a9f
    parameters:
      weight: 0.16
      density: 0.35
  - model: /workspace/prototype-0.4x204
    parameters:
      weight: 0.16
      density: 0.35
  - model: /workspace/prototype-0.4x208
    parameters:
      weight: 0.16
      density: 0.35
  - model: /workspace/prototype-0.4x210
    parameters:
      weight: 0.16
      density: 0.35
  - model: /workspace/prototype-0.4x197
    parameters:
      weight: 0.2
      density: 0.35
merge_method: dare_ties
base_model: /workspace/prototype-0.4x197
parameters:
  normalize: false
dtype: bfloat16
chat_template: llama3
pad_to_multiple_of: 8
int8_mask: true
tokenizer:
  source: base
```
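A merge like this can be reproduced by saving the configuration above to a YAML file and running it through mergekit. Below is a sketch based on mergekit's documented Python entry point; the filename `prototype-0.4x213.yml` is a placeholder, the `/workspace/...` paths referenced in the config are local to the original run, and exact `MergeOptions` fields may vary between mergekit versions.

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above.
with open("prototype-0.4x213.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the DARE TIES merge and write the result to disk.
run_merge(
    merge_config,
    out_path="./prototype-0.4x213",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```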