
A SLERP merge of vikhr_nemo_orpo_dostoevsky_12b and Vikhr-Nemo-12B-Instruct-R-21-09-24 (12.2B parameters, bfloat16), published as IlyaGusev/vikhr_nemo_orpo_dostoevsky_12b_slerp.

Merge Details

Merge Method

This is a merge of pre-trained language models created using mergekit, produced with the SLERP merge method.
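SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the arc between them rather than along a straight line, which keeps the interpolated weights at a sensible scale even when the two checkpoints point in noticeably different directions. A minimal NumPy sketch of the per-tensor operation (the flattening and the near-parallel fallback are illustrative assumptions, not mergekit's exact code):

import numpy as np

def slerp(t, a, b, eps=1e-8):
    # Spherically interpolate between two weight tensors: t=0 -> a, t=1 -> b.
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(a_unit @ b_unit, -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: plain linear interpolation is a safe fallback.
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape)
    coef_a = np.sin((1.0 - t) * theta) / np.sin(theta)
    coef_b = np.sin(t * theta) / np.sin(theta)
    return (coef_a * a_flat + coef_b * b_flat).reshape(a.shape)

merged = slerp(0.5, np.random.randn(4, 4), np.random.randn(4, 4))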

Models Merged

The following models were included in the merge:

  • vikhr_nemo_orpo_dostoevsky_12b
  • Vikhr-Nemo-12B-Instruct-R-21-09-24

Configuration

The following YAML configuration was used to produce this model:

base_model: vikhr_nemo_orpo_dostoevsky_12b
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - filter: self_attn
    value: [0.0, 0.5, 0.3, 0.7, 1.0]
  - filter: mlp
    value: [1.0, 0.5, 0.7, 0.3, 0.0]
  - value: 0.5
slices:
- sources:
  - layer_range: [0, 40]
    model: Vikhr-Nemo-12B-Instruct-R-21-09-24
  - layer_range: [0, 40]
    model: vikhr_nemo_orpo_dostoevsky_12b
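Here t is the interpolation weight between the two parents (roughly, 0 keeps the base model's tensor and 1 the other model's), and the five-element lists act as a gradient stretched across the 40 merged layers, with separate schedules for the self-attention and MLP weights and a flat 0.5 for everything else. A rough sketch of how such a gradient could map to per-layer t values, assuming simple linear interpolation between evenly spaced anchors (mergekit's exact anchor placement is an assumption here):

import numpy as np

# Hypothetical illustration of the layer-wise t schedule over 40 layers.
self_attn_anchors = [0.0, 0.5, 0.3, 0.7, 1.0]
mlp_anchors = [1.0, 0.5, 0.7, 0.3, 0.0]
num_layers = 40

anchor_pos = np.linspace(0, num_layers - 1, num=len(self_attn_anchors))
layers = np.arange(num_layers)
self_attn_t = np.interp(layers, anchor_pos, self_attn_anchors)
mlp_t = np.interp(layers, anchor_pos, mlp_anchors)

for i in (0, 10, 20, 30, 39):
    print(f"layer {i:2d}: self_attn t={self_attn_t[i]:.2f}, mlp t={mlp_t[i]:.2f}")

Given the configuration file above, the merge itself can be reproduced by pointing mergekit at it (for example via the mergekit-yaml command-line entry point) with both parent checkpoints available locally or on the Hugging Face Hub.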


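For completeness, a minimal sketch of loading the published merge with transformers; the Russian example prompt (the parent Vikhr models are Russian-focused) and the generation settings are placeholders, not recommendations from the model author:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "IlyaGusev/vikhr_nemo_orpo_dostoevsky_12b_slerp"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)

# Example Russian prompt: "Tell me about yourself."
messages = [{"role": "user", "content": "Расскажи о себе."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))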