Model Card for Nikichoksi/llama2-7b-backward-model
Finetune the base language model (Llama 2 7B) on (output, instruction) pairs {(y_i, x_i)} from the seed data to obtain a backward model M_yx := p(x|y). In other words, finetune a model that uses the output to predict the instruction. Training uses the openassistant-guanaco training set.
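The backward objective only changes how the seed pairs are serialized: each example conditions on the output y_i and supervises the instruction x_i. A minimal sketch of that pair-swapping step is below; the prompt template and function names are illustrative assumptions, not the exact format used to train this checkpoint.

```python
# Sketch: turn a seed (instruction, output) pair into one backward
# training example so the model learns p(x | y). The template below
# is an assumed format, not the exact one used for this model.
BACKWARD_TEMPLATE = (
    "Below is a response. Write the instruction that likely produced it.\n\n"
    "### Response:\n{output}\n\n"
    "### Instruction:\n{instruction}"
)

def make_backward_example(instruction: str, output: str) -> str:
    """Serialize one seed pair (x_i, y_i) with the roles swapped:
    the response comes first (the conditioning context) and the
    instruction last (the supervised target)."""
    return BACKWARD_TEMPLATE.format(output=output, instruction=instruction)

example = make_backward_example(
    instruction="What is the capital of France?",
    output="The capital of France is Paris.",
)
```

Mapping this function over the seed set yields texts that can be fed to any standard causal-LM finetuning loop; the only difference from forward instruction tuning is the field order in the template.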
Model Description
- Developed by: Niki Choksi
- Finetuned from model: meta-llama/Llama-2-7b-hf
Model tree for Nikichoksi/llama2-7b-backward-model
Base model: meta-llama/Llama-2-7b-hf