train_cb_1757340167

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the cb dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1335
  • Num Input Tokens Seen: 361992
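
Given the PEFT version listed under Framework versions, this checkpoint appears to be a parameter-efficient adapter on top of the base model. Below is a minimal loading sketch, assuming the adapter is published as rbelanec/train_cb_1757340167 and that you have access to the gated Llama 3 base weights; the prompt is illustrative only:

```python
# A minimal sketch; the adapter id and prompt format are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_cb_1757340167"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the PEFT adapter
model.eval()

prompt = "Premise: ...\nHypothesis: ...\nAnswer:"  # illustrative CB-style prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```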

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto TrainingArguments follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
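
A hedged sketch of how these settings might map onto the transformers Trainer API; output_dir is a placeholder, and the betas/epsilon shown above are the adamw_torch defaults:

```python
# A minimal sketch mapping the listed hyperparameters onto
# transformers.TrainingArguments; output_dir is a placeholder.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="train_cb_1757340167",
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",          # betas=(0.9, 0.999), eps=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```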

Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.7951        | 0.5088 | 29   | 0.6471          | 18048             |
| 0.2042        | 1.0175 | 58   | 0.1866          | 36928             |
| 0.1537        | 1.5263 | 87   | 0.1335          | 54176             |
| 0.331         | 2.0351 | 116  | 0.1582          | 73136             |
| 0.1844        | 2.5439 | 145  | 0.2117          | 91216             |
| 0.0761        | 3.0526 | 174  | 0.1914          | 110696            |
| 0.1566        | 3.5614 | 203  | 0.1673          | 129448            |
| 0.197         | 4.0702 | 232  | 0.1535          | 147176            |
| 0.2652        | 4.5789 | 261  | 0.1619          | 164424            |
| 0.0332        | 5.0877 | 290  | 0.1772          | 183416            |
| 0.465         | 5.5965 | 319  | 0.1760          | 203256            |
| 0.0383        | 6.1053 | 348  | 0.1951          | 220912            |
| 0.0998        | 6.6140 | 377  | 0.1752          | 240336            |
| 0.1873        | 7.1228 | 406  | 0.1756          | 257848            |
| 0.0519        | 7.6316 | 435  | 0.1726          | 276824            |
| 0.1781        | 8.1404 | 464  | 0.1769          | 294504            |
| 0.053         | 8.6491 | 493  | 0.1806          | 313576            |
| 0.05          | 9.1579 | 522  | 0.1811          | 332256            |
| 0.3965        | 9.6667 | 551  | 0.1742          | 350336            |

The reported evaluation loss of 0.1335 matches the best validation loss in this run, reached at step 87 (epoch 1.5263).

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1
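
To check that a local environment matches these pins, a quick version check might look like the following (package names are the PyPI distributions):

```python
# Print installed versions of the packages listed above.
import importlib.metadata as md

for pkg in ("peft", "transformers", "torch", "datasets", "tokenizers"):
    print(f"{pkg}: {md.version(pkg)}")
```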