70B_unstruct

This is an attempt to take Llama 3.3 Instruct and peel back some of its instruction-following overtraining and positivity. Using 3.3 Instruct as the base model and merging in the 3.1 base as well as the abliterated 3.3 model, this merge subtracts out roughly 80% of the largest changes between 3.1 and 3.3 at 0.5 weight. In addition, the abliteration of the refusal pathway is added (or subtracted) back into the model at 0.5 weight.

In theory this yields a model that is roughly 75% refusal-abliterated, retains roughly 70% of its instruction-following capability (and is somewhat healed), and has its instruct overtuning rolled back by about 50%.

Key points:

The Δw task vector for the abliterated model consists exclusively of the opposed refusal vectors.

The Δw task vector for the 3.1 base model is the reversal of all of the instruct and instruct-related tuning applied in moving from 3.1 to 3.3 Instruct.
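A minimal per-tensor sketch of the task-vector arithmetic described above; the function and tensor names are illustrative only, and DELLA's magnitude-based pruning (density/epsilon) and rescaling, which mergekit applies on top of this, are omitted:

import torch

def merge_tensor(w_33_instruct: torch.Tensor,
                 w_abliterated: torch.Tensor,
                 w_31_base: torch.Tensor,
                 weight: float = 0.5) -> torch.Tensor:
    # Task vector of the abliterated model: essentially the opposed refusal directions.
    delta_abliteration = w_abliterated - w_33_instruct
    # Task vector of the 3.1 base: the reversal of the 3.1 -> 3.3 Instruct tuning.
    delta_unstruct = w_31_base - w_33_instruct
    # DELLA would additionally drop low-magnitude entries of each delta and
    # rescale the survivors before summing; that step is skipped here.
    return w_33_instruct + weight * delta_abliteration + weight * delta_unstruct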

Merge Method

This model was merged using the DELLA merge method, with meta-llama/Llama-3.3-70B-Instruct as the base.

Models Merged

This is a merge of pre-trained language models created using mergekit.

The following models were included in the merge:

huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
meta-llama/Llama-3.1-70B

Configuration

The following YAML configuration was used to produce this model:

models:

  - model: huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
    parameters:
      density: 0.7
      epsilon: 0.2
      weight: 0.5
       
  - model: meta-llama/Llama-3.1-70B
    parameters:
      density: 0.8
      epsilon: 0.1
      weight: 0.5

  - model: meta-llama/Llama-3.3-70B-Instruct

merge_method: della
base_model: meta-llama/Llama-3.3-70B-Instruct
tokenizer_source: meta-llama/Llama-3.3-70B-Instruct
parameters:
  normalize: false
  int8_mask: false
  lambda: 1.0

dtype: float32
out_dtype: bfloat16
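Assuming the configuration above is saved as config.yaml, a merge like this can typically be reproduced with mergekit's command-line entry point, e.g. mergekit-yaml config.yaml ./70B_unstruct.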