# merge1

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the Passthrough merge method.
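
As a rough illustration (not mergekit's actual implementation), passthrough merging can be thought of as concatenating the selected layer slices end-to-end rather than averaging weights. A minimal sketch, assuming mergekit's `layer_range` is half-open (`[start, end)`):

```python
# Slices taken from the YAML configuration below. The third source,
# "merge", appears to reference an intermediate merge result.
slices = [
    ("powermove72/LLama-4b-amt-v0.5", (0, 16)),
    ("powermove72/LLama-4b-amt-v0.5.1", (8, 28)),
    ("merge", (20, 36)),
]

merged_layers = []
for model, (start, end) in slices:
    # Assumption: layer_range [start, end] selects layers start..end-1.
    merged_layers.extend((model, i) for i in range(start, end))

print(len(merged_layers))  # 16 + 20 + 16 = 52 layers in the merged stack
```

Note that some of the stacked ranges overlap in their source indices (e.g. layers 8-15 appear in both the first and second slices), which is how passthrough merges grow a larger model out of smaller ones.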

### Models Merged

The following models were included in the merge:

* powermove72/LLama-4b-amt-v0.5
* powermove72/LLama-4b-amt-v0.5.1

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: passthrough
dtype: bfloat16
tokenizer_source: powermove72/LLama-4b-amt-v0.5.2

#base_model: huihui-ai/Hermes-3-Llama-3.2-3B-abliterated

slices:
  - sources:
      - model: powermove72/LLama-4b-amt-v0.5
        layer_range: [0, 16]
        parameters:
          weight: 0.6

  - sources:
      - model: powermove72/LLama-4b-amt-v0.5.1
        layer_range: [8, 28]
        parameters:
          weight: 0.4

  - sources:
      - model: merge
        layer_range: [20, 36]
        parameters:
          weight: 0.6
```
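Assuming mergekit is installed, a configuration file like the one above is typically applied with the `mergekit-yaml` command (the config filename and output path here are illustrative):

```shell
pip install mergekit
mergekit-yaml config.yaml ./output-model-directory
```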
**Model size:** 6B params (BF16, Safetensors)

**Model repository:** powermove72/LLama-6b-amt-v0.1