mistral-7b-openhermes-2.5-sft

mistral-7b-openhermes-2.5-sft is a supervised fine-tuned (SFT) version of unsloth/mistral-7b-bnb-4bit, trained on the teknium/OpenHermes-2.5 dataset.

Fine-tuning configuration

LoRA

  • r: 256
  • LoRA alpha: 128
  • LoRA dropout: 0.0
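
As a rough sketch, this adapter configuration could be reproduced with Unsloth's PEFT helper along the following lines; the `target_modules` list is an assumption, since the card does not state which projection layers were adapted:

```python
from unsloth import FastLanguageModel

# Load the 4-bit base model used as the starting point for SFT.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach a LoRA adapter with the hyperparameters listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=256,
    lora_alpha=128,
    lora_dropout=0.0,
    # Assumed target modules; the card does not list them.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```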

Training arguments

  • Epochs: 1
  • Batch size: 4
  • Gradient accumulation steps: 6
  • Optimizer: adamw_torch_fused
  • Max steps: 100
  • Learning rate: 0.0002
  • Weight decay: 0.1
  • Learning rate scheduler type: linear
  • Max seq length: 2048
  • 4-bit quantization (bitsandbytes): True
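
Below is a minimal sketch of the corresponding TRL setup. It assumes an older SFTTrainer API that accepts `max_seq_length` and `tokenizer` directly (newer TRL versions move these options into `SFTConfig`), and a simple flat prompt template for the OpenHermes-2.5 conversations, which the card does not specify:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("teknium/OpenHermes-2.5", split="train")

def formatting_func(batch):
    # OpenHermes-2.5 stores multi-turn chats under "conversations" with
    # "from"/"value" keys; the flat "role: text" template below is an
    # assumption, as the card does not state the prompt format used.
    return [
        "\n".join(f"{turn['from']}: {turn['value']}" for turn in convo)
        for convo in batch["conversations"]
    ]

trainer = SFTTrainer(
    model=model,            # the LoRA-wrapped model from the sketch above
    tokenizer=tokenizer,
    train_dataset=dataset,
    formatting_func=formatting_func,
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",   # hypothetical output path
        num_train_epochs=1,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=6,
        optim="adamw_torch_fused",
        max_steps=100,          # with max_steps set, training stops here
                                # even if the epoch is not complete
        learning_rate=2e-4,
        weight_decay=0.1,
        lr_scheduler_type="linear",
    ),
)
trainer.train()
```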

Trained with Unsloth and Hugging Face's TRL library.
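
The published checkpoint can be loaded for inference with plain transformers. This is a generic sketch rather than an official usage example, and the expected chat/prompt format is not stated on the card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CorticalStack/mistral-7b-openhermes-2.5-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",
)

prompt = "Explain what supervised fine-tuning does in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```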

