darkScribe-daredevil-8B-abliterated

I kinda overcooked this one. Use it for fun, but don't expect the most coherent chats. I was trying to train it for dark fantasy and creative novel writing, but this was my first fine-tune and it technically wasn't set up correctly. The final training loss was 0.8, so it should still work for basic chats.

I'm an undergrad doing some fine-tuning as a hobby. If you have any helpful ideas, please feel free to share them with me at [email protected].

Recommended sampling settings (from the base model, mlabonne/daredevil-8B-abliterated; change as desired): temperature = 0.7, top_k = 50, top_p = 0.95
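
For reference, here's a minimal sketch of applying those settings when running the GGUF locally with llama-cpp-python. The quant filename pattern and context size are assumptions; point it at whichever quant file you actually download.

```python
# Minimal sketch: run the GGUF with llama-cpp-python using the recommended sampling settings.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ringusTheImp/darkScribe-daredevil-8B-abliterated",
    filename="*Q4_K_M.gguf",  # hypothetical quant choice; use the GGUF file you downloaded
    n_ctx=4096,               # assumed context window; adjust to fit your hardware
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Open a dark fantasy tale in two paragraphs."}],
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(out["choices"][0]["message"]["content"])
```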

Format: GGUF
Model size: 8.03B params
Architecture: llama
