TylerG01/Indigo-v0.1
Refer to the original model card for more details on the model.
Project Goals
This is the v0.1 (alpha) release of the Indigo LLM project, which used LoRA fine-tuning to train Mistral 7B on more than 400 books, pamphlets, training documents, code snippets, and other cybersecurity works openly sourced on the surface web. This version used 16 LoRA layers and reached a validation loss of 1.601 after the fourth training epoch. My goal for the LoRA version of this model is a validation loss below 1.51 after some modification to the dataset and training approach.
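For readers unfamiliar with LoRA, the core idea behind the fine-tuning described above can be sketched in a few lines. This is a minimal illustration of the low-rank update, not the actual Indigo training configuration: the dimensions, rank, and scaling below are hypothetical, and the real run adapts specific weight matrices inside Mistral 7B rather than a single toy layer.

```python
import numpy as np

# Minimal LoRA sketch (illustrative dimensions, not Mistral 7B's).
# Instead of updating the full weight W, LoRA trains two small
# low-rank factors A (r x d_in) and B (d_out x r); the effective
# weight becomes W + (alpha / r) * B @ A, with W kept frozen.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 8, 16

W = rng.normal(size=(d_out, d_in))     # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-init

def lora_forward(x):
    # Base path plus the scaled low-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B zero-initialized, the adapted layer starts out identical
# to the frozen base layer, so training begins from the base model.
assert np.allclose(lora_forward(x), W @ x)
```

Because only A and B are trained, the number of trainable parameters is a small fraction of the base model's, which is what makes fine-tuning a 7B model on a modest corpus practical.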
For more information on this project, check out the blog post at https://t2-security.com/indigo-llm-503cd6e22fe4.