Version_Test_ASAP_FineTuningBERT_AugV14_k10_task1_organization_k10_k10_fold4

This model is a fine-tuned version of bert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how these metrics are typically computed follows the list):

  • Loss: 0.7820
  • Qwk (quadratic weighted kappa): 0.5794
  • Mse (mean squared error): 0.7820
  • Rmse (root mean squared error): 0.8843
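
Qwk here is the quadratic weighted Cohen's kappa commonly used for ASAP essay scoring, and Rmse is the square root of Mse. Below is a minimal sketch of how such metrics are typically computed with scikit-learn; rounding predictions to integer scores for the kappa is an assumption, not something this card confirms.

```python
# Hypothetical metric computation for an essay-scoring regressor (assumed setup).
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(y_true, y_pred):
    # QWK compares discrete scores, so predictions are rounded (assumption).
    qwk = cohen_kappa_score(
        np.rint(y_true).astype(int),
        np.rint(y_pred).astype(int),
        weights="quadratic",
    )
    mse = mean_squared_error(y_true, y_pred)  # matches the reported Loss/Mse
    return {"qwk": qwk, "mse": mse, "rmse": float(np.sqrt(mse))}
```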

Model description

More information needed

Intended uses & limitations

More information needed
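
Pending details from the author, here is a minimal loading sketch. It assumes the checkpoint exposes a single-logit regression head (num_labels=1), which is consistent with the MSE-based evaluation above but is not confirmed by this card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "genki10/Version_Test_ASAP_FineTuningBERT_AugV14_k10_task1_organization_k10_k10_fold4"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer(
    "An example essay to score.",
    return_tensors="pt",
    truncation=True,
    max_length=512,
)
with torch.no_grad():
    # Single logit read as a raw score (assumed regression output).
    score = model(**inputs).logits.squeeze().item()
print(score)
```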

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the Trainer-style sketch after the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 100
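
As a reference, the hyperparameters above map onto the Hugging Face Trainer API roughly as follows; output_dir and the per-epoch evaluation setting are assumptions inferred from the per-epoch results table below, not the author's actual script.

```python
from transformers import TrainingArguments

# A minimal sketch for Transformers 4.47.0, mirroring the listed hyperparameters.
args = TrainingArguments(
    output_dir="outputs",              # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    eval_strategy="epoch",             # assumed from the per-epoch results table
)
```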

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk    | Mse    | Rmse   |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log        | 1.0   | 7    | 8.6701          | 0.0018 | 8.6701 | 2.9445 |
| No log        | 2.0   | 14   | 4.1860          | 0.0079 | 4.1860 | 2.0460 |
| No log        | 3.0   | 21   | 1.3297          | 0.0445 | 1.3297 | 1.1531 |
| No log        | 4.0   | 28   | 0.8253          | 0.1390 | 0.8253 | 0.9085 |
| No log        | 5.0   | 35   | 0.7852          | 0.2630 | 0.7852 | 0.8861 |
| No log        | 6.0   | 42   | 0.7412          | 0.4863 | 0.7412 | 0.8610 |
| No log        | 7.0   | 49   | 0.8901          | 0.4950 | 0.8901 | 0.9435 |
| No log        | 8.0   | 56   | 0.8034          | 0.5385 | 0.8034 | 0.8963 |
| No log        | 9.0   | 63   | 1.1706          | 0.4363 | 1.1706 | 1.0820 |
| No log        | 10.0  | 70   | 0.6291          | 0.5785 | 0.6291 | 0.7931 |
| No log        | 11.0  | 77   | 0.7019          | 0.5779 | 0.7019 | 0.8378 |
| No log        | 12.0  | 84   | 1.0124          | 0.4603 | 1.0124 | 1.0062 |
| No log        | 13.0  | 91   | 1.1666          | 0.4453 | 1.1666 | 1.0801 |
| No log        | 14.0  | 98   | 0.8615          | 0.5627 | 0.8615 | 0.9282 |
| No log        | 15.0  | 105  | 0.8066          | 0.5903 | 0.8066 | 0.8981 |
| No log        | 16.0  | 112  | 0.7290          | 0.6039 | 0.7290 | 0.8538 |
| No log        | 17.0  | 119  | 0.7209          | 0.6154 | 0.7209 | 0.8491 |
| No log        | 18.0  | 126  | 0.7896          | 0.5691 | 0.7896 | 0.8886 |
| No log        | 19.0  | 133  | 0.6203          | 0.6205 | 0.6203 | 0.7876 |
| No log        | 20.0  | 140  | 0.6388          | 0.6105 | 0.6388 | 0.7992 |
| No log        | 21.0  | 147  | 0.6444          | 0.6097 | 0.6444 | 0.8028 |
| No log        | 22.0  | 154  | 0.6278          | 0.6232 | 0.6278 | 0.7924 |
| No log        | 23.0  | 161  | 0.8842          | 0.5667 | 0.8842 | 0.9403 |
| No log        | 24.0  | 168  | 0.6279          | 0.6311 | 0.6279 | 0.7924 |
| No log        | 25.0  | 175  | 0.6641          | 0.6010 | 0.6641 | 0.8149 |
| No log        | 26.0  | 182  | 0.6060          | 0.6127 | 0.6060 | 0.7785 |
| No log        | 27.0  | 189  | 0.8539          | 0.5665 | 0.8539 | 0.9241 |
| No log        | 28.0  | 196  | 0.6868          | 0.6108 | 0.6868 | 0.8287 |
| No log        | 29.0  | 203  | 0.7885          | 0.5838 | 0.7885 | 0.8880 |
| No log        | 30.0  | 210  | 0.7659          | 0.6084 | 0.7659 | 0.8752 |
| No log        | 31.0  | 217  | 0.6400          | 0.6128 | 0.6400 | 0.8000 |
| No log        | 32.0  | 224  | 0.7071          | 0.5945 | 0.7071 | 0.8409 |
| No log        | 33.0  | 231  | 0.5634          | 0.6338 | 0.5634 | 0.7506 |
| No log        | 34.0  | 238  | 0.9967          | 0.5014 | 0.9967 | 0.9983 |
| No log        | 35.0  | 245  | 0.6658          | 0.6016 | 0.6658 | 0.8160 |
| No log        | 36.0  | 252  | 0.7387          | 0.5848 | 0.7387 | 0.8595 |
| No log        | 37.0  | 259  | 0.7656          | 0.5824 | 0.7656 | 0.8750 |
| No log        | 38.0  | 266  | 0.6149          | 0.6002 | 0.6149 | 0.7841 |
| No log        | 39.0  | 273  | 0.9827          | 0.4962 | 0.9827 | 0.9913 |
| No log        | 40.0  | 280  | 0.8988          | 0.5114 | 0.8988 | 0.9481 |
| No log        | 41.0  | 287  | 0.6664          | 0.6068 | 0.6664 | 0.8163 |
| No log        | 42.0  | 294  | 0.7269          | 0.5977 | 0.7269 | 0.8526 |
| No log        | 43.0  | 301  | 0.7173          | 0.5810 | 0.7173 | 0.8470 |
| No log        | 44.0  | 308  | 0.6310          | 0.6178 | 0.6310 | 0.7943 |
| No log        | 45.0  | 315  | 0.9993          | 0.5017 | 0.9993 | 0.9996 |
| No log        | 46.0  | 322  | 0.7824          | 0.5933 | 0.7824 | 0.8846 |
| No log        | 47.0  | 329  | 0.6323          | 0.6032 | 0.6323 | 0.7952 |
| No log        | 48.0  | 336  | 0.7735          | 0.5514 | 0.7735 | 0.8795 |
| No log        | 49.0  | 343  | 0.6039          | 0.6257 | 0.6039 | 0.7771 |
| No log        | 50.0  | 350  | 0.7249          | 0.5987 | 0.7249 | 0.8514 |
| No log        | 51.0  | 357  | 0.8585          | 0.5222 | 0.8585 | 0.9265 |
| No log        | 52.0  | 364  | 0.6155          | 0.6190 | 0.6155 | 0.7846 |
| No log        | 53.0  | 371  | 0.7820          | 0.5794 | 0.7820 | 0.8843 |
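
Although num_epochs was set to 100, the log ends at epoch 53, and the headline evaluation results above match that final row; the best validation loss (0.5634) and Qwk (0.6338) in this run occurred at epoch 33.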

Framework versions

  • Transformers 4.47.0
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0