Model Card for Nikichoksi/llama2-7b-backward-model

Finetune the base language model (Llama-2 7B) with (output, instruction) pairs {(yi, xi)} from the seed data to obtain a backward model Myx := p(x|y). In other words, finetune a model that uses the output to predict the instruction. Training uses the openassistant-guanaco dataset.
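The core data step is reversing each (instruction, output) pair so the model learns p(x|y): the response is placed first and the instruction becomes the prediction target. A minimal sketch of that reversal is below; the prompt template and function name are illustrative assumptions, not the exact format used to train this checkpoint.

```python
def make_backward_example(instruction: str, output: str) -> str:
    """Build one backward-model training example.

    The output (y) comes first and the instruction (x) follows, so a
    causal LM finetuned on this text learns to predict the instruction
    from the output. The template here is an assumption for
    illustration only.
    """
    return (
        f"### Response:\n{output}\n\n"
        f"### Instruction:\n{instruction}"
    )


# Reversing one seed-data pair (hypothetical example content):
example = make_backward_example(
    instruction="Name the capital of France.",
    output="The capital of France is Paris.",
)
print(example)
```

Mapping this function over the openassistant-guanaco training split yields the reversed corpus; the result can then be fed to any standard causal-LM finetuning loop against meta-llama/Llama-2-7b-hf.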

Model Description

  • Developed by: Niki Choksi
  • Finetuned from model: meta-llama/Llama-2-7b-hf

Model tree for Nikichoksi/llama2-7b-backward-model
