Version_Test_ASAP_FineTuningBERT_AugV14_k10_task1_organization_k10_k10_fold1

This model is a fine-tuned version of bert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7059
  • Qwk: 0.6169
  • Mse: 0.7060
  • Rmse: 0.8403
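The metrics above are related: Rmse is the square root of Mse, and Qwk is Cohen's kappa with quadratic weights, a standard metric for ordinal essay-scoring labels such as ASAP's. A minimal pure-Python sketch (a hypothetical helper, not the card's actual evaluation code):

```python
import math

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic penalty weights.
    Hypothetical helper for illustration, not the card's evaluation code."""
    # Confusion matrix of observed (true, predicted) label counts.
    observed = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        observed[t][p] += 1
    n = len(y_true)
    true_hist = [sum(row) for row in observed]
    pred_hist = [sum(observed[i][j] for i in range(n_classes))
                 for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic penalty
            num += w * observed[i][j]
            den += w * true_hist[i] * pred_hist[j] / n  # chance agreement
    return 1.0 - num / den

# RMSE is simply sqrt(MSE), which matches the card's numbers:
# sqrt(0.7060) ~= 0.8403
```

Perfect agreement yields a kappa of 1.0, perfectly inverted predictions on two classes yield -1.0.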

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • num_epochs: 100
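For reproduction, the list above maps roughly onto transformers.TrainingArguments keywords as follows (a sketch: the keyword names follow current Transformers conventions, and anything the card does not list, such as output paths or logging settings, is omitted):

```python
# Hyperparameters from the card, collected as keyword arguments that could be
# splatted into transformers.TrainingArguments(output_dir=..., **training_kwargs).
# Collected as a plain dict here so the mapping is visible without importing
# transformers.
training_kwargs = dict(
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```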

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk    | Mse    | Rmse   |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log        | 1.0   | 7    | 5.6121          | 0.0616 | 5.6098 | 2.3685 |
| No log        | 2.0   | 14   | 2.4621          | 0.0290 | 2.4601 | 1.5685 |
| No log        | 3.0   | 21   | 1.1146          | 0.0    | 1.1131 | 1.0550 |
| No log        | 4.0   | 28   | 0.9268          | 0.0518 | 0.9258 | 0.9622 |
| No log        | 5.0   | 35   | 0.7784          | 0.3557 | 0.7773 | 0.8816 |
| No log        | 6.0   | 42   | 0.7633          | 0.4115 | 0.7623 | 0.8731 |
| No log        | 7.0   | 49   | 0.5587          | 0.5647 | 0.5580 | 0.7470 |
| No log        | 8.0   | 56   | 0.7821          | 0.5200 | 0.7813 | 0.8839 |
| No log        | 9.0   | 63   | 0.4913          | 0.6618 | 0.4909 | 0.7007 |
| No log        | 10.0  | 70   | 0.4747          | 0.6340 | 0.4741 | 0.6885 |
| No log        | 11.0  | 77   | 0.5096          | 0.6715 | 0.5093 | 0.7136 |
| No log        | 12.0  | 84   | 0.5876          | 0.6538 | 0.5875 | 0.7665 |
| No log        | 13.0  | 91   | 0.5266          | 0.6728 | 0.5264 | 0.7255 |
| No log        | 14.0  | 98   | 0.5108          | 0.6134 | 0.5102 | 0.7143 |
| No log        | 15.0  | 105  | 0.5589          | 0.6456 | 0.5588 | 0.7475 |
| No log        | 16.0  | 112  | 0.5505          | 0.6112 | 0.5499 | 0.7416 |
| No log        | 17.0  | 119  | 0.5948          | 0.6257 | 0.5943 | 0.7709 |
| No log        | 18.0  | 126  | 0.8448          | 0.5696 | 0.8450 | 0.9192 |
| No log        | 19.0  | 133  | 0.9808          | 0.5183 | 0.9812 | 0.9906 |
| No log        | 20.0  | 140  | 1.1641          | 0.4694 | 1.1647 | 1.0792 |
| No log        | 21.0  | 147  | 0.8094          | 0.5828 | 0.8096 | 0.8998 |
| No log        | 22.0  | 154  | 0.5612          | 0.6496 | 0.5610 | 0.7490 |
| No log        | 23.0  | 161  | 0.6263          | 0.6553 | 0.6262 | 0.7914 |
| No log        | 24.0  | 168  | 0.5409          | 0.6670 | 0.5408 | 0.7354 |
| No log        | 25.0  | 175  | 0.5971          | 0.6546 | 0.5970 | 0.7727 |
| No log        | 26.0  | 182  | 0.7787          | 0.6067 | 0.7789 | 0.8825 |
| No log        | 27.0  | 189  | 0.6775          | 0.6311 | 0.6775 | 0.8231 |
| No log        | 28.0  | 196  | 0.5800          | 0.6415 | 0.5798 | 0.7614 |
| No log        | 29.0  | 203  | 0.7138          | 0.6124 | 0.7139 | 0.8449 |
| No log        | 30.0  | 210  | 0.5582          | 0.6278 | 0.5580 | 0.7470 |
| No log        | 31.0  | 217  | 0.6497          | 0.6336 | 0.6497 | 0.8060 |
| No log        | 32.0  | 224  | 0.5363          | 0.6552 | 0.5361 | 0.7322 |
| No log        | 33.0  | 231  | 0.7059          | 0.6169 | 0.7060 | 0.8403 |
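Although num_epochs was set to 100, the log stops at epoch 33 (the card does not say why; early stopping is a plausible guess), and the final reported Qwk of 0.6169 is not the best row: validation Qwk peaks at epoch 13. A short sketch that picks the best epoch from the (epoch, Qwk) pairs transcribed from the table:

```python
# (epoch, validation Qwk) pairs transcribed from the results table above.
qwk_by_epoch = [
    (1, 0.0616), (2, 0.0290), (3, 0.0), (4, 0.0518), (5, 0.3557),
    (6, 0.4115), (7, 0.5647), (8, 0.5200), (9, 0.6618), (10, 0.6340),
    (11, 0.6715), (12, 0.6538), (13, 0.6728), (14, 0.6134), (15, 0.6456),
    (16, 0.6112), (17, 0.6257), (18, 0.5696), (19, 0.5183), (20, 0.4694),
    (21, 0.5828), (22, 0.6496), (23, 0.6553), (24, 0.6670), (25, 0.6546),
    (26, 0.6067), (27, 0.6311), (28, 0.6415), (29, 0.6124), (30, 0.6278),
    (31, 0.6336), (32, 0.6552), (33, 0.6169),
]

# Select the epoch with the highest validation Qwk.
best_epoch, best_qwk = max(qwk_by_epoch, key=lambda pair: pair[1])
```

This suggests the epoch-13 checkpoint (Qwk 0.6728) would be preferable to the final one if checkpoints were saved per epoch.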

Framework versions

  • Transformers 4.47.0
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0