
Combined Task Vector Model

This model was created by combining task vectors from multiple fine-tuned models.

Task Vector Computation

```python
t_1 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4")
t_2 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-helpful-harmless-filtered-375exs")
t_3 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-helpful-alpaca-375exs")
t_combined = 1.0 * t_1 + 1.2 * t_2 - 1.2 * t_3
new_model = t_combined.apply_to("meta-llama/Llama-2-7b-chat-hf", scaling_coef=1.0)
```
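The `TaskVector` implementation itself is not shown in this card. As a rough illustration of the arithmetic involved (a task vector is the parameter-wise difference between a fine-tuned model and its base, and vectors are scaled and summed elementwise), here is a minimal sketch operating on toy scalar "state dicts" rather than real checkpoints; the class name and methods mirror the snippet above but are otherwise an assumption:

```python
class TaskVector:
    """Toy task vector over plain dict state dicts (illustration only)."""

    def __init__(self, pretrained=None, finetuned=None, vector=None):
        if vector is not None:
            self.vector = vector
        else:
            # vector = finetuned - pretrained, parameter by parameter
            self.vector = {k: finetuned[k] - pretrained[k] for k in pretrained}

    def __add__(self, other):
        return TaskVector(vector={k: self.vector[k] + other.vector[k] for k in self.vector})

    def __sub__(self, other):
        return TaskVector(vector={k: self.vector[k] - other.vector[k] for k in self.vector})

    def __rmul__(self, coef):
        # supports expressions like 1.2 * t_2
        return TaskVector(vector={k: coef * v for k, v in self.vector.items()})

    def apply_to(self, pretrained, scaling_coef=1.0):
        # merged weights = base + scaling_coef * combined vector
        return {k: pretrained[k] + scaling_coef * self.vector[k] for k in pretrained}


# Toy example with a single scalar "weight"
base = {"w": 1.0}
ft1 = {"w": 3.0}   # task 1 shifts w by +2.0
ft2 = {"w": 0.5}   # task 2 shifts w by -0.5
t1 = TaskVector(base, ft1)
t2 = TaskVector(base, ft2)
merged = (1.0 * t1 + 1.2 * t2).apply_to(base, scaling_coef=1.0)
# merged["w"] == 1.0 + 1.0 * 2.0 + 1.2 * (-0.5) == 2.4
```

A real implementation would do the same arithmetic over PyTorch `state_dict` tensors loaded from the checkpoints listed above.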

Models Used

  • Base model: meta-llama/Llama-2-7b-chat-hf
  • Fine-tuned model 1: coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4
  • Fine-tuned model 2: coastalcph/Llama-2-7b-chat-helpful-harmless-filtered-375exs
  • Fine-tuned model 3: coastalcph/Llama-2-7b-chat-helpful-alpaca-375exs

Technical Details

  • Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
  • Task Vector Method: Additive combination
  • Args:

```json
{
  "pretrained_model": "meta-llama/Llama-2-7b-chat-hf",
  "finetuned_model1": "coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4",
  "finetuned_model2": "coastalcph/Llama-2-7b-chat-helpful-harmless-filtered-375exs",
  "finetuned_model3": "coastalcph/Llama-2-7b-chat-helpful-alpaca-375exs",
  "output_model_name": "coastalcph/Llama-2-7b-chat-1t_gsm8k-1.2t_hh_diff_alpaca_375exs",
  "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
  "scaling_coef": 1.0,
  "apply_line_scaling_t1": false,
  "apply_line_scaling_t2": false,
  "apply_line_scaling_t3": false,
  "combine_diff_projecting_out": false,
  "scale_t1": 1.0,
  "scale_t2": 1.2,
  "scale_t3": 1.2
}
```
  • Format: Safetensors
  • Model size: 6.74B params
  • Tensor type: F32