
STS-Lora-Fine-Tuning-Capstone-roberta-base-deepset-test-111-with-higher-r-mid

This model is a LoRA fine-tuned version of deepset/roberta-base-squad2 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0593
  • Accuracy: 0.5627
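
As a rough sketch, the adapter can be loaded with PEFT on top of the base checkpoint. Since this card reports accuracy, the example assumes a sequence-classification head; the num_labels value is an assumption, as the label set is not documented here.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

# Base checkpoint named on this card; the classification head is freshly
# initialized on top of it, so expect a warning about untrained weights.
base = AutoModelForSequenceClassification.from_pretrained(
    "deepset/roberta-base-squad2",
    num_labels=6,  # assumption: the label count is not stated on this card
)
model = PeftModel.from_pretrained(
    base,
    "rajevan123/STS-Lora-Fine-Tuning-Capstone-roberta-base-deepset-test-111-with-higher-r-mid",
)
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")

# Hypothetical STS-style sentence pair; the actual input format is undocumented.
inputs = tokenizer(
    "A man is playing a guitar.",
    "A person plays an instrument.",
    return_tensors="pt",
)
predicted_class = model(**inputs).logits.argmax(-1)
```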

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
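
A minimal sketch of how these hyperparameters might map onto the standard transformers Trainer API is shown below. The LoRA r, lora_alpha, and lora_dropout values are assumptions (the model name only hints at a "higher r"); the TrainingArguments mirror the list above.

```python
from transformers import TrainingArguments
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=16,              # assumption: exact rank not stated ("higher r" in the name)
    lora_alpha=32,     # assumption
    lora_dropout=0.1,  # assumption
)
# model = get_peft_model(base_model, lora_config)

training_args = TrainingArguments(
    output_dir="sts-lora-roberta-base-squad2",  # hypothetical path
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="epoch",  # matches the per-epoch evaluation in the results table
)
```

The Adam betas and epsilon listed above are the transformers defaults, so they need no explicit arguments.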

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 297  | 1.2901          | 0.4489   |
| 1.2919        | 2.0   | 594  | 1.1817          | 0.4931   |
| 1.2919        | 3.0   | 891  | 1.1639          | 0.4996   |
| 1.0546        | 4.0   | 1188 | 1.1222          | 0.5221   |
| 1.0546        | 5.0   | 1485 | 1.1199          | 0.5279   |
| 0.9971        | 6.0   | 1782 | 1.1256          | 0.5257   |
| 0.9606        | 7.0   | 2079 | 1.0944          | 0.5439   |
| 0.9606        | 8.0   | 2376 | 1.1414          | 0.5323   |
| 0.9423        | 9.0   | 2673 | 1.0932          | 0.5337   |
| 0.9423        | 10.0  | 2970 | 1.1029          | 0.5468   |
| 0.9171        | 11.0  | 3267 | 1.0914          | 0.5330   |
| 0.9069        | 12.0  | 3564 | 1.0582          | 0.5533   |
| 0.9069        | 13.0  | 3861 | 1.0677          | 0.5526   |
| 0.8954        | 14.0  | 4158 | 1.0817          | 0.5460   |
| 0.8954        | 15.0  | 4455 | 1.0703          | 0.5526   |
| 0.8926        | 16.0  | 4752 | 1.0724          | 0.5555   |
| 0.8845        | 17.0  | 5049 | 1.0583          | 0.5591   |
| 0.8845        | 18.0  | 5346 | 1.0749          | 0.5620   |
| 0.8666        | 19.0  | 5643 | 1.0559          | 0.5518   |
| 0.8666        | 20.0  | 5940 | 1.0660          | 0.5591   |
| 0.8602        | 21.0  | 6237 | 1.0620          | 0.5533   |
| 0.8582        | 22.0  | 6534 | 1.0891          | 0.5591   |
| 0.8582        | 23.0  | 6831 | 1.0565          | 0.5656   |
| 0.8539        | 24.0  | 7128 | 1.0680          | 0.5591   |
| 0.8539        | 25.0  | 7425 | 1.0556          | 0.5620   |
| 0.8551        | 26.0  | 7722 | 1.0605          | 0.5569   |
| 0.8512        | 27.0  | 8019 | 1.0560          | 0.5635   |
| 0.8512        | 28.0  | 8316 | 1.0552          | 0.5627   |
| 0.8505        | 29.0  | 8613 | 1.0599          | 0.5613   |
| 0.8505        | 30.0  | 8910 | 1.0593          | 0.5627   |

Framework versions

  • PEFT 0.10.0
  • Transformers 4.38.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
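
To reproduce this environment, the versions above can be pinned, e.g. in a requirements.txt (the +cu121 build of PyTorch comes from the CUDA wheel index rather than the version pin itself):

```
peft==0.10.0
transformers==4.38.2
torch==2.2.1
datasets==2.18.0
tokenizers==0.15.2
```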