train_qnli_1753094141

This model is a PEFT adapter fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct on the qnli dataset. It achieves the following results on the evaluation set (a loading example follows the list):

  • Loss: 0.0436
  • Num Input Tokens Seen: 103607072
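Because this repository ships a PEFT adapter rather than full model weights (see Framework versions below), it must be loaded on top of the base model. The snippet below is a minimal loading sketch, assuming the adapter id rbelanec/train_qnli_1753094141 from this repo; the dtype/device settings and the QNLI-style prompt are illustrative only, since the prompt template used during training is not documented in this card.

```python
# Minimal loading sketch (assumptions: adapter id from this repo,
# bfloat16 weights, and an illustrative QNLI-style prompt).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_qnli_1753094141"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# QNLI asks whether a sentence answers a question
# (entailment / not_entailment). This prompt format is an assumption.
prompt = (
    "Does the sentence answer the question?\n"
    "Question: What is the capital of France?\n"
    "Sentence: Paris is the capital and largest city of France.\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```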

Model description

This repository contains a PEFT adapter (not full model weights) for meta-llama/Meta-Llama-3-8B-Instruct, fine-tuned on the QNLI question-sentence entailment task. See the loading example above for how to apply it to the base model.

Intended uses & limitations

More information needed

Training and evaluation data

The adapter was trained and evaluated on the qnli dataset (question-answering natural language inference); the validation losses below are reported on its evaluation split. More information needed on preprocessing and prompt formatting.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (expressed as a TrainingArguments sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
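For reference, the list above maps directly onto the standard transformers TrainingArguments. Here is a minimal sketch, assuming a Trainer-based run; the output directory is a placeholder, and any gradient-accumulation or PEFT-specific settings are not documented in this card.

```python
# Hedged sketch: the hyperparameters above expressed as transformers
# TrainingArguments. The output_dir is illustrative, not from this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="train_qnli_1753094141",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```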

Training results

| Training Loss | Epoch  | Step   | Validation Loss | Input Tokens Seen |
|--------------:|-------:|-------:|----------------:|------------------:|
| 0.0555        | 0.5000 | 11784  | 0.0754          | 5193280           |
| 0.0410        | 1.0000 | 23568  | 0.0613          | 10365728          |
| 0.1024        | 1.5001 | 35352  | 0.0544          | 15547488          |
| 0.0455        | 2.0001 | 47136  | 0.0528          | 20725792          |
| 0.0237        | 2.5001 | 58920  | 0.0490          | 25887456          |
| 0.0264        | 3.0001 | 70704  | 0.0509          | 31082368          |
| 0.0692        | 3.5001 | 82488  | 0.0467          | 36266176          |
| 0.0935        | 4.0002 | 94272  | 0.0453          | 41440992          |
| 0.0368        | 4.5002 | 106056 | 0.0451          | 46618176          |
| 0.0472        | 5.0002 | 117840 | 0.0449          | 51803520          |
| 0.0268        | 5.5002 | 129624 | 0.0443          | 56978912          |
| 0.0233        | 6.0003 | 141408 | 0.0441          | 62167168          |
| 0.0673        | 6.5003 | 153192 | 0.0438          | 67356288          |
| 0.0405        | 7.0003 | 164976 | 0.0446          | 72532096          |
| 0.0230        | 7.5003 | 176760 | 0.0442          | 77710656          |
| 0.0175        | 8.0003 | 188544 | 0.0436          | 82887904          |
| 0.0049        | 8.5004 | 200328 | 0.0438          | 88066400          |
| 0.0380        | 9.0004 | 212112 | 0.0436          | 93248224          |
| 0.0997        | 9.5004 | 223896 | 0.0436          | 98430752          |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.7.1+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.1