train_qnli_1753094143

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the qnli dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0427
  • Num Input Tokens Seen: 103607072
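
Since this is a PEFT adapter rather than a full model, inference requires attaching the adapter weights to the meta-llama/Meta-Llama-3-8B-Instruct base model. A minimal loading sketch follows; the adapter repo id comes from the model tree at the end of this card, while the QNLI-style prompt wording is an assumption, since the training prompt template is not documented here:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_qnli_1753094143"  # adapter repo, from the model tree below

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter

# The exact prompt template used during training is not documented in this card;
# the QNLI-style wording below is an illustrative assumption.
prompt = (
    "Does the sentence answer the question?\n"
    "Question: What is the capital of France?\n"
    "Sentence: Paris is the capital and largest city of France.\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```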

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
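
Although the data pipeline is not documented here, the introduction states the model was trained on the qnli dataset. QNLI is distributed as part of GLUE, so a plausible loading sketch with 🤗 Datasets looks like this (whether the GLUE distribution was actually used is an assumption):

```python
from datasets import load_dataset

# Assumption: "qnli" refers to the GLUE QNLI task; this card does not
# document the exact data source or preprocessing.
qnli = load_dataset("glue", "qnli")
print(qnli)              # DatasetDict with train / validation / test splits
print(qnli["train"][0])  # fields: question, sentence, label (0 = entailment, 1 = not_entailment), idx
```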

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
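
For reference, here is a minimal sketch of how the hyperparameters above map onto `transformers.TrainingArguments` (the output directory is an illustrative placeholder; the evaluation and checkpointing cadence is not documented in this card and is omitted):

```python
from transformers import TrainingArguments

# Sketch only: field names follow the Transformers 4.51 API;
# output_dir is an illustrative placeholder.
args = TrainingArguments(
    output_dir="train_qnli_1753094143",
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```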

Training results

| Training Loss | Epoch  | Step   | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:------:|:---------------:|:-----------------:|
| 0.0118        | 0.5000 | 11784  | 0.0645          | 5193280           |
| 0.0357        | 1.0000 | 23568  | 0.0513          | 10365728          |
| 0.121         | 1.5001 | 35352  | 0.0470          | 15547488          |
| 0.012         | 2.0001 | 47136  | 0.0439          | 20725792          |
| 0.0072        | 2.5001 | 58920  | 0.0435          | 25887456          |
| 0.0147        | 3.0001 | 70704  | 0.0498          | 31082368          |
| 0.0655        | 3.5001 | 82488  | 0.0466          | 36266176          |
| 0.069         | 4.0002 | 94272  | 0.0427          | 41440992          |
| 0.0212        | 4.5002 | 106056 | 0.0460          | 46618176          |
| 0.0962        | 5.0002 | 117840 | 0.0484          | 51803520          |
| 0.003         | 5.5002 | 129624 | 0.0524          | 56978912          |
| 0.028         | 6.0003 | 141408 | 0.0493          | 62167168          |
| 0.0807        | 6.5003 | 153192 | 0.0527          | 67356288          |
| 0.0182        | 7.0003 | 164976 | 0.0579          | 72532096          |
| 0.0027        | 7.5003 | 176760 | 0.0573          | 77710656          |
| 0.005         | 8.0003 | 188544 | 0.0577          | 82887904          |
| 0.0008        | 8.5004 | 200328 | 0.0613          | 88066400          |
| 0.0023        | 9.0004 | 212112 | 0.0601          | 93248224          |
| 0.0544        | 9.5004 | 223896 | 0.0610          | 98430752          |

The lowest validation loss (0.0427) is reached at epoch 4.0, matching the evaluation loss reported at the top of this card.

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.7.1+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.1

Model tree for rbelanec/train_qnli_1753094143

This model is an adapter for meta-llama/Meta-Llama-3-8B-Instruct.