# merge1

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method

This model was merged using the passthrough merge method, which stacks the selected layer slices from the source models verbatim rather than interpolating their weights.
### Models Merged

The following models were included in the merge:

* [EpistemeAI/ReasoningCore-Llama-3.2-3B-r1-v1_2](https://huggingface.co/EpistemeAI/ReasoningCore-Llama-3.2-3B-r1-v1_2)
* [bunnycore/Llama-3.2-3b-RP-Toxic-Fuse](https://huggingface.co/bunnycore/Llama-3.2-3b-RP-Toxic-Fuse)
* the output of a previous merge step, referenced as `merge` in the configuration below
### Configuration

The following YAML configuration was used to produce this model:
```yaml
merge_method: passthrough
dtype: bfloat16
tokenizer_source: union
# base_model: huihui-ai/Hermes-3-Llama-3.2-3B-abliterated
slices:
  - sources:
      - model: EpistemeAI/ReasoningCore-Llama-3.2-3B-r1-v1_2
        layer_range: [0, 16]
        parameters:
          weight: 0.65  # Transcendent: HyperZone 1 - Efficiency Nexus (low layers with entropy scaling)
  - sources:
      - model: bunnycore/Llama-3.2-3b-RP-Toxic-Fuse
        layer_range: [16, 27]
        parameters:
          weight: 0.6  # Hyperscaled for transcendent boost
  - sources:
      - model: merge  # reference the Step 1 output directory/model
        layer_range: [26, 27]
        # tensors: ["self_attn"]
        parameters:
          weight: 0.75  # Optimized via simulated genius annealing
        # tensors: ["mlp"]
        # parameters:
        #   weight: 0.95
```
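To reproduce the merge, save the configuration above to a file and run it through mergekit (for example, `mergekit-yaml config.yaml ./merged`). Below is a minimal sketch using mergekit's Python API; the file and output paths are placeholders, and note that the `model: merge` entry must point at the output directory of the earlier merge step before this configuration can run:

```python
# Minimal sketch: run the merge via mergekit's Python API.
# Assumes `pip install mergekit` and that config.yaml holds the YAML above;
# config.yaml and ./merged are placeholder paths, not part of the original card.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # honors tokenizer_source: union
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```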
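## Usage

A minimal usage sketch with Transformers, assuming the merged weights are published as `powermove72/LLama-4b-amt-v0.3`:

```python
# Minimal usage sketch; adjust model_id if you ran the merge locally.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "powermove72/LLama-4b-amt-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches dtype: bfloat16 in the merge config
    device_map="auto",
)

prompt = "Explain what a passthrough model merge does:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```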