Discussion-Phi-4-text

This model is a fine-tuned version of microsoft/phi-4 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1265

Model description

More information needed

Intended uses & limitations

More information needed
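
The card gives no usage details; the snippet below is a minimal, hedged sketch of loading this adapter on top of the base model with PEFT, assuming the adapter weights are hosted at TakalaWang/Discussion-Phi-4-text (the repository this card belongs to) and that the prompt shown is purely illustrative.

```python
# Minimal loading sketch (assumption: adapter hosted at TakalaWang/Discussion-Phi-4-text).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the fine-tuned LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-4", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "TakalaWang/Discussion-Phi-4-text")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")

# Illustrative prompt; the intended task is not documented in the card.
prompt = "Summarize the key points of the discussion."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```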

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent TrainingArguments sketch follows the list):

  • learning_rate: 4e-05
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 16
  • optimizer: adamw_torch with betas=(0.9, 0.95) and epsilon=1e-07; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 3
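
The values above map onto Hugging Face TrainingArguments roughly as follows. This is a reconstruction from the reported hyperparameters, not the actual training script; output_dir and the rest of the Trainer setup are assumptions.

```python
# Hedged sketch of TrainingArguments matching the listed hyperparameters.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Discussion-Phi-4-text",  # assumption; not stated in the card
    learning_rate=4e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,  # effective train batch size: 1 * 16 = 16
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-7,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=3,
)
```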

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6764        | 0.2235 | 10   | 2.4496          |
| 2.1053        | 0.4469 | 20   | 1.9257          |
| 1.222         | 0.6704 | 30   | 1.0594          |
| 0.1878        | 0.8939 | 40   | 0.1615          |
| 0.1642        | 1.1117 | 50   | 0.1395          |
| 0.1127        | 1.3352 | 60   | 0.1343          |
| 0.1483        | 1.5587 | 70   | 0.1332          |
| 0.1342        | 1.7821 | 80   | 0.1338          |
| 0.1529        | 2.0    | 90   | 0.1323          |
| 0.1327        | 2.2235 | 100  | 0.1289          |
| 0.095         | 2.4469 | 110  | 0.1286          |
| 0.1446        | 2.6704 | 120  | 0.1304          |
| 0.1631        | 2.8939 | 130  | 0.1265          |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.4.1+cu124
  • Datasets 3.5.1
  • Tokenizers 0.21.1
