train_qnli_1753094142

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the qnli dataset. It achieves the following results on the evaluation set (a loading sketch follows below):

  • Loss: 0.0377
  • Num Input Tokens Seen: 103607072
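The framework versions listed below indicate that this checkpoint is a PEFT adapter on top of the base model rather than a full set of weights. The following is a minimal loading sketch, assuming the adapter is published as rbelanec/train_qnli_1753094142 and that you have access to the gated meta-llama base model; the QNLI prompt template is a guess, since the one used during fine-tuning is not documented in this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "rbelanec/train_qnli_1753094142"

# Load the base model first, then apply the adapter weights on top.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Hypothetical QNLI-style prompt (question + context sentence); the exact
# template used during training is not given in this card.
prompt = (
    "Question: Where is the Eiffel Tower located?\n"
    "Sentence: The Eiffel Tower is in Paris.\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Because only the adapter weights are stored in this repository, the full base model is downloaded separately on first load.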

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 123
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
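For reference, these settings map onto the 🤗 Transformers TrainingArguments API roughly as follows. This is a hedged reconstruction, not the actual training script; output_dir and any argument not listed above are assumptions:

```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters.
args = TrainingArguments(
    output_dir="train_qnli_1753094142",  # assumed, not stated in the card
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=123,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```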

Training results

| Training Loss | Epoch  | Step   | Validation Loss | Input Tokens Seen |
|--------------:|-------:|-------:|----------------:|------------------:|
| 0.007         | 0.5000 | 11784  | 0.0461          | 5193280           |
| 0.0106        | 1.0000 | 23568  | 0.0445          | 10365728          |
| 0.0595        | 1.5001 | 35352  | 0.0471          | 15547488          |
| 0.0045        | 2.0001 | 47136  | 0.0377          | 20725792          |
| 0.0283        | 2.5001 | 58920  | 0.0493          | 25887456          |
| 0.0016        | 3.0001 | 70704  | 0.0753          | 31082368          |
| 0.002         | 3.5001 | 82488  | 0.0693          | 36266176          |
| 0.0017        | 4.0002 | 94272  | 0.0669          | 41440992          |
| 0.0           | 4.5002 | 106056 | 0.0962          | 46618176          |
| 0.025         | 5.0002 | 117840 | 0.0887          | 51803520          |
| 0.0           | 5.5002 | 129624 | 0.1154          | 56978912          |
| 0.0           | 6.0003 | 141408 | 0.1034          | 62167168          |
| 0.0           | 6.5003 | 153192 | 0.1117          | 67356288          |
| 0.0           | 7.0003 | 164976 | 0.1250          | 72532096          |
| 0.0           | 7.5003 | 176760 | 0.1402          | 77710656          |
| 0.0           | 8.0003 | 188544 | 0.1565          | 82887904          |
| 0.0           | 8.5004 | 200328 | 0.1776          | 88066400          |
| 0.0           | 9.0004 | 212112 | 0.1827          | 93248224          |
| 0.0           | 9.5004 | 223896 | 0.1967          | 98430752          |

The validation loss reaches its minimum of 0.0377 at epoch 2 and rises steadily afterward while the training loss collapses toward zero, a typical overfitting pattern; the evaluation loss reported at the top of this card corresponds to that epoch-2 checkpoint.

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.7.1+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.1
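
To reproduce this environment, the versions above can be pinned in a requirements.txt; a minimal sketch (the +cu126 PyTorch build additionally requires installing from the CUDA 12.6 wheel index rather than plain PyPI):

```
peft==0.15.2
transformers==4.51.3
torch==2.7.1
datasets==3.6.0
tokenizers==0.21.1
```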