train_cb_1753094176

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the cb dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1527
  • Num Input Tokens Seen: 367864
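
Since the framework versions below list PEFT, this checkpoint is presumably a PEFT adapter on top of the base model rather than full fine-tuned weights. The card does not include usage code; the following is a minimal loading sketch, assuming the adapter is hosted as rbelanec/train_cb_1753094176 and that cb refers to the CommitmentBank entailment task (the prompt format is illustrative):

```python
# Minimal sketch: attach this adapter to the base model with PEFT.
# Assumptions: the adapter repo id is "rbelanec/train_cb_1753094176",
# you have gated access to meta-llama/Meta-Llama-3-8B-Instruct, and
# `accelerate` is installed (required for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_cb_1753094176"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Illustrative prompt; the exact template used during training is not documented here.
prompt = "Premise: ... Hypothesis: ... Does the premise entail the hypothesis?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```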

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
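
The training script itself is not part of this card. As a minimal sketch, the listed values map onto transformers.TrainingArguments as follows; the output_dir is illustrative, and any LoRA/PEFT configuration used in the actual run is not shown here:

```python
# Sketch only: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_cb_1753094176",  # illustrative
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```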

Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.9966        | 0.5088 | 29   | 0.7892          | 20064             |
| 0.2673        | 1.0175 | 58   | 0.2178          | 37832             |
| 0.1677        | 1.5263 | 87   | 0.1670          | 57288             |
| 0.0877        | 2.0351 | 116  | 0.1561          | 74520             |
| 0.5623        | 2.5439 | 145  | 0.1636          | 93080             |
| 0.1167        | 3.0526 | 174  | 0.1527          | 111928            |
| 0.2432        | 3.5614 | 203  | 0.1574          | 131160            |
| 0.1046        | 4.0702 | 232  | 0.1574          | 150056            |
| 0.0209        | 4.5789 | 261  | 0.1617          | 167208            |
| 0.0522        | 5.0877 | 290  | 0.1599          | 186160            |
| 0.0172        | 5.5965 | 319  | 0.1626          | 206000            |
| 0.1588        | 6.1053 | 348  | 0.1594          | 224064            |
| 0.1067        | 6.6140 | 377  | 0.1608          | 243840            |
| 0.0126        | 7.1228 | 406  | 0.1666          | 261504            |
| 0.1272        | 7.6316 | 435  | 0.1654          | 280352            |
| 0.0081        | 8.1404 | 464  | 0.1673          | 299344            |
| 0.2357        | 8.6491 | 493  | 0.1686          | 318672            |
| 0.0518        | 9.1579 | 522  | 0.1663          | 337480            |
| 0.0621        | 9.6667 | 551  | 0.1646          | 356456            |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.7.1+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.1