NeuralNovel/Panda-7B-v0.1
The Panda-7B-v0.1 model by NeuralNovel.
This fine-tune is designed to provide detailed, creative and logical responses across diverse narratives, and is optimised for creative writing, roleplay and logical problem solving.
Full-parameter fine-tune (FFT) of Mistral-7B-Instruct-v0.2. Apache-2.0 license, suitable for commercial or non-commercial use.
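As a minimal usage sketch, the model can be loaded with the transformers library like its Mistral-7B-Instruct-v0.2 base; the chat template, sampling settings and prompt below are illustrative assumptions rather than values specified in this card.

```python
# Minimal inference sketch using transformers; assumes the model keeps the
# chat template inherited from Mistral-7B-Instruct-v0.2.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NeuralNovel/Panda-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "Write a short scene set in a rain-soaked city."}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```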
Dataset
The model was fine-tuned on the Panda-v1 dataset.
Summary
Fine-tuned with the intention to generate instructive and narrative text, with a specific focus on combining the elements of versatility, character engagement and nuanced writing capability.
Out-of-Scope Use
The model may not perform well in scenarios unrelated to instructive and narrative text generation. Misuse or applications outside its designed scope may result in suboptimal outcomes.
Bias, Risks, and Limitations
The model may exhibit biases or limitations inherent in the training data. It is essential to consider these factors when deploying the model to avoid unintended consequences.
Users are advised to exercise caution, as there might be some inherent genre or writing bias.
Hardware and Training
- n_epochs = 3
- n_checkpoints = 3
- batch_size = 12
- learning_rate = 1e-5
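For orientation, the sketch below shows one way these hyperparameters could map onto a Hugging Face TrainingArguments configuration; the card does not state which training framework was actually used, and the output directory and bf16 setting are assumptions.

```python
# Hypothetical mapping of the reported hyperparameters onto TrainingArguments;
# the actual training stack is not specified in this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="panda-7b-v0.1-fft",     # hypothetical output directory
    num_train_epochs=3,                 # n_epochs = 3
    per_device_train_batch_size=12,     # batch_size = 12
    learning_rate=1e-5,                 # learning_rate = 1e-5
    save_total_limit=3,                 # n_checkpoints = 3
    bf16=True,                          # assumption: mixed-precision training
)
```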
Sincere appreciation to Techmind for their generous sponsorship.