train_sst2_1753094146

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the sst2 dataset (SST-2, the Stanford Sentiment Treebank binary sentiment classification task). It achieves the following results on the evaluation set:

  • Loss: 0.0678
  • Num input tokens seen: 33,869,824

Model description

This repository contains a PEFT adapter for meta-llama/Meta-Llama-3-8B-Instruct, trained for binary sentiment classification on SST-2 (see the framework versions below for the exact PEFT and Transformers releases). No further architectural details are documented here.

Intended uses & limitations

The adapter is intended for binary (positive/negative) sentiment classification of English sentences in the style of SST-2. Its behavior on other domains, languages, or tasks is not documented here. A minimal loading sketch follows.
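Since the repository ships an adapter rather than full model weights, inference requires loading it on top of the base model. A minimal sketch, assuming a plain instruction-style prompt (the prompt template actually used during training is not documented in this card):

```python
# A minimal sketch: load the adapter on top of the base model.
# The prompt format below is an assumption; the card does not document
# how SST-2 examples were templated during training.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("rbelanec/train_sst2_1753094146")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

prompt = (
    "Classify the sentiment of the following sentence as positive or negative.\n"
    "Sentence: a gorgeous, witty, seductive movie.\n"
    "Sentiment:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```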

Training and evaluation data

The model was trained and evaluated on the sst2 dataset: English movie-review sentences from the Stanford Sentiment Treebank, each labeled positive or negative. No further preprocessing details are documented here.
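If the dataset in question is the GLUE distribution of SST-2 on the Hugging Face Hub (an assumption; the card only names "sst2"), it can be inspected as follows:

```python
# A hedged sketch: load SST-2 via the GLUE benchmark on the Hugging Face Hub.
# Whether this exact source/config was used for training is an assumption.
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")
print(dataset)               # train/validation/test splits with 'sentence' and 'label' columns
print(dataset["train"][0])   # e.g. {'sentence': 'hide new secretions ...', 'label': 0, 'idx': 0}
```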

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
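A hedged sketch of a TrainingArguments configuration mirroring these values (the actual training script is not included in this card, so anything beyond the listed values is an assumption):

```python
# A minimal sketch mirroring the hyperparameters above; the original
# training script is not part of this card, so treat it as illustrative.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_sst2_1753094146",
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",     # betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,        # ~15,154 of the 151,540 total steps
    num_train_epochs=10.0,
)
```

With 151,540 optimizer steps in total, a warmup ratio of 0.1 corresponds to roughly 15,154 warmup steps, i.e. the cosine schedule warms up over approximately the first epoch.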

Training results

Training Loss   Epoch   Step     Validation Loss   Input Tokens Seen
0.0637          0.5     7577     0.1022            1694048
0.0263          1.0     15154    0.0812            3385616
0.1749          1.5     22731    0.0762            5082864
0.1459          2.0     30308    0.0697            6774096
0.0021          2.5     37885    0.0715            8467152
0.0024          3.0     45462    0.0678            10161824
0.0031          3.5     53039    0.0771            11856000
0.1649          4.0     60616    0.0729            13549104
0.0559          4.5     68193    0.0761            15241168
0.0017          5.0     75770    0.0756            16935568
0.0418          5.5     83347    0.0771            18626160
0.0748          6.0     90924    0.0807            20320896
0.0923          6.5     98501    0.0838            22013696
0.0985          7.0     106078   0.0846            23709008
0.0090          7.5     113655   0.0976            25400400
0.0012          8.0     121232   0.0927            27099520
0.0009          8.5     128809   0.0931            28792480
0.1008          9.0     136386   0.0968            30484864
0.0034          9.5     143963   0.0961            32173664
0.0029          10.0    151540   0.0953            33869824
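Validation loss reaches its minimum of 0.0678 at epoch 3.0 (step 45462), matching the evaluation loss reported at the top of this card, and drifts upward in later epochs while training loss stays low, a pattern consistent with overfitting. For anyone reproducing the run, the standard Transformers options below keep the checkpoint with the best validation loss; whether the original run used them is an assumption:

```python
# A hedged sketch: retain the checkpoint with the lowest validation loss.
# Whether the original run used these options is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_sst2_1753094146",
    eval_strategy="steps",
    eval_steps=7577,          # matches the evaluation cadence in the table above
    save_strategy="steps",
    save_steps=7577,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
```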

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.7.1+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.1