
# OpenHermes 2.5 Strix Philosophy Mistral 7B

- Finetuned by: sayhan
- License: apache-2.0
- Finetuned from model: teknium/OpenHermes-2.5-Mistral-7B
- Dataset: sayhan/strix-philosophy-qa

- LoRA rank: 8
- LoRA alpha: 16
- LoRA dropout: 0
- Rank-stabilized LoRA: Yes
- Number of epochs: 3
- Learning rate: 1e-5
- Batch size: 2
- Gradient accumulation steps: 4
- Weight decay: 0.01
- Target modules (mapped onto a config object in the sketch after this list):

  - Query projection (`q_proj`)
  - Key projection (`k_proj`)
  - Value projection (`v_proj`)
  - Output projection (`o_proj`)
  - Gate projection (`gate_proj`)
  - Up projection (`up_proj`)
  - Down projection (`down_proj`)
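
The training script itself is not published, so the following is only a minimal sketch of how the hyperparameters above would map onto Hugging Face PEFT and `transformers` objects. The values come from this card; the `output_dir`, the `bf16` flag, and the overall structure are assumptions, not the author's recipe.

```python
# Sketch of the LoRA setup described above (assumed structure, not the
# author's published code). Hyperparameter values are taken from this card.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

lora_config = LoraConfig(
    r=8,                      # LoRA rank
    lora_alpha=16,            # LoRA alpha
    lora_dropout=0.0,         # LoRA dropout
    use_rslora=True,          # rank-stabilized LoRA scaling
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained(
    "teknium/OpenHermes-2.5-Mistral-7B", torch_dtype=torch.bfloat16
)
model = get_peft_model(base, lora_config)

# Trainer-side hyperparameters from the card; bf16 is inferred from the
# adapter's tensor type and is an assumption.
training_args = TrainingArguments(
    output_dir="strix-philosophy-lora",  # hypothetical path
    num_train_epochs=3,
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    weight_decay=0.01,
    bf16=True,
)
```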
- Format: Safetensors
- Model size: 7.24B params
- Tensor type: BF16
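
For local inference, a minimal sketch that loads the base model in BF16 and applies this LoRA adapter with PEFT. It assumes the tokenizer ships the base model's ChatML chat template; the example question is hypothetical.

```python
# Minimal local-inference sketch (assumed usage; not from the card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "teknium/OpenHermes-2.5-Mistral-7B"
adapter_id = "sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

messages = [{"role": "user", "content": "What is epistemic justification?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```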
