# OpenHermes 2.5 Strix Philosophy Mistral 7B

- Finetuned by: sayhan
- License: apache-2.0
- Finetuned from model: teknium/OpenHermes-2.5-Mistral-7B
- Dataset: sayhan/strix-philosophy-qa
Training hyperparameters:
- LoRA rank: 8
- LoRA alpha: 16
- LoRA dropout: 0
- Rank-stabilized LoRA: Yes
- Number of epochs: 3
- Learning rate: 1e-5
- Batch size: 2
- Gradient accumulation steps: 4
- Weight decay: 0.01
Target modules:
- Query projection (`q_proj`)
- Key projection (`k_proj`)
- Value projection (`v_proj`)
- Output projection (`o_proj`)
- Gate projection (`gate_proj`)
- Up projection (`up_proj`)
- Down projection (`down_proj`)
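The hyperparameters above map directly onto a PEFT `LoraConfig`. A minimal sketch of how this configuration might be expressed (a config fragment, not the author's actual training script; the `task_type` value is an assumption based on the causal-LM base model):

```python
from peft import LoraConfig

# LoRA configuration mirroring the values listed in this card.
# task_type="CAUSAL_LM" is assumed from the Mistral base model.
lora_config = LoraConfig(
    r=8,                 # LoRA rank
    lora_alpha=16,       # LoRA alpha
    lora_dropout=0.0,    # LoRA dropout
    use_rslora=True,     # rank-stabilized LoRA
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```

The remaining values (3 epochs, learning rate 1e-5, batch size 2, gradient accumulation 4, weight decay 0.01) would be passed to the trainer's arguments rather than the LoRA config.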
Model tree for sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA:
- Base model: mistralai/Mistral-7B-v0.1
- Finetuned: teknium/OpenHermes-2.5-Mistral-7B
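Since this repository is a LoRA adapter rather than a full model, it would typically be loaded on top of the finetuned base with PEFT. A minimal usage sketch (assuming the repository IDs shown in the model tree; not tested against the actual weights):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the adapter was trained from.
base = AutoModelForCausalLM.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")
tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")

# Attach the LoRA adapter weights from this repository.
model = PeftModel.from_pretrained(
    base, "sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA"
)
```

For standalone inference without PEFT at runtime, `model.merge_and_unload()` can fold the adapter into the base weights.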