finetuned-phi-3.5

This model is a fine-tuned version of unsloth/phi-3.5-mini-instruct-bnb-4bit on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2131
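
Assuming the reported loss is the mean per-token cross-entropy, this corresponds to an evaluation perplexity of roughly exp(0.2131) ≈ 1.24.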

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a minimal reproduction sketch follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 3407
  • gradient_accumulation_steps: 32
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
  • mixed_precision_training: Native AMP
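
The sketch below shows how these hyperparameters map onto a Hugging Face TrainingArguments object. The output directory, optimizer name, and precision flag are assumptions; the card does not state how the trainer was wired up.

```python
# Minimal sketch of a TrainingArguments configuration matching the
# hyperparameters above. Model, dataset, and trainer setup are not
# documented in this card and are omitted here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned-phi-3.5",   # hypothetical output path
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=32,   # 4 x 32 = 128 total train batch size
    num_train_epochs=20,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=3407,
    optim="adamw_torch",              # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    fp16=True,                        # "Native AMP" mixed-precision training
)
```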

Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.4522        | 0.9951  | 19   | 1.8875          |
| 1.2043        | 1.9902  | 38   | 0.5652          |
| 0.5041        | 2.9853  | 57   | 0.4559          |
| 0.4434        | 3.9804  | 76   | 0.4152          |
| 0.4065        | 4.9755  | 95   | 0.3812          |
| 0.3697        | 5.9705  | 114  | 0.3486          |
| 0.3329        | 6.9656  | 133  | 0.3160          |
| 0.296         | 7.9607  | 152  | 0.2885          |
| 0.26          | 8.9558  | 171  | 0.2672          |
| 0.228         | 9.9509  | 190  | 0.2497          |
| 0.1993        | 10.9460 | 209  | 0.2347          |
| 0.1727        | 11.9411 | 228  | 0.2241          |
| 0.1525        | 12.9362 | 247  | 0.2157          |
| 0.1329        | 13.9313 | 266  | 0.2131          |

Framework versions

  • PEFT 0.13.0
  • Transformers 4.44.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.0
  • Tokenizers 0.19.1
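
Since the checkpoint is a PEFT adapter, inference requires loading the base model and attaching the adapter. The sketch below assumes the adapter is available under a hypothetical hub id or local path; the card does not document a loading recipe.

```python
# Minimal inference sketch (an assumption -- not documented in this card).
# "your-username/finetuned-phi-3.5" is a hypothetical adapter id; replace
# it with the actual repository name or a local checkpoint path.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/phi-3.5-mini-instruct-bnb-4bit"
adapter_id = "your-username/finetuned-phi-3.5"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```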