Mistral-7B fine-tuned on a dataset of BTS fanfic.

This model uses the Alpaca prompt format:

{"instruction": "An interaction between a user providing instructions, and an imaginative assistant providing responses.", "input": "...", "output": "..."}

Note: this model uses RoPE scaling with a factor of 4.0 and scaling type `linear`.
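A minimal loading sketch with Hugging Face `transformers` is shown below. The repo id is a placeholder, and it assumes a `transformers` version whose Mistral config supports the `rope_scaling` option; if the scaling settings are already baked into the model's `config.json`, the override is redundant.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/mistral-7b-bts-fanfic"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    # Linear RoPE scaling with factor 4.0, as noted above.
    rope_scaling={"type": "linear", "factor": 4.0},
    device_map="auto",
)
```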
