best_model-yelp_polarity-64-87

This model is a fine-tuned version of albert-base-v2 on an unspecified dataset (the model name suggests yelp_polarity). It achieves the following results on the evaluation set; a sketch of an equivalent evaluation loop follows the list:

  • Loss: 0.5142
  • Accuracy: 0.9219
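
A minimal sketch of how such numbers could be reproduced. Assumptions, since the card does not document the evaluation data: the repo id is simonycl/best_model-yelp_polarity-64-87 (from the model tree below), and the eval set is drawn from yelp_polarity's test split.

```python
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumptions: repo id taken from the model tree; the card itself calls
# the dataset "unknown", so yelp_polarity's test split is a guess.
model_id = "simonycl/best_model-yelp_polarity-64-87"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

dataset = load_dataset("yelp_polarity", split="test[:512]")
metric = evaluate.load("accuracy")

for i in range(0, len(dataset), 32):
    batch = dataset[i : i + 32]  # slicing a Dataset yields a dict of lists
    enc = tokenizer(batch["text"], truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        preds = model(**enc).logits.argmax(dim=-1)
    metric.add_batch(predictions=preds, references=batch["label"])

print(metric.compute())  # {'accuracy': ...}
```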

Model description

More information needed

Intended uses & limitations

More information needed
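
No intended uses are documented. For reference, a minimal inference sketch, assuming the checkpoint is a standard binary sequence classifier; the label names depend on the uploaded config and are not guaranteed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: repo id from the model tree; labels are 0/1 polarity.
model_id = "simonycl/best_model-yelp_polarity-64-87"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "The food was great and the staff were friendly."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))  # falls back to the raw index
```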

Training and evaluation data

More information needed. The 4 optimizer steps per epoch at a train batch size of 32 imply a training set of roughly 128 examples; the "64" in the model name likely denotes 64 examples per class.

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a TrainingArguments sketch mirroring them follows the list:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 150
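
These values map directly onto transformers.TrainingArguments. A minimal sketch under that assumption; the output directory is a placeholder, and the model/data wiring of the original script is not published:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="best_model-yelp_polarity-64-87",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=150,
    evaluation_strategy="epoch",  # assumption: the table below reports per-epoch eval
    logging_strategy="epoch",
)
```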

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 0.4813 | 0.9453 |
| No log | 2.0 | 8 | 0.4523 | 0.9531 |
| 0.996 | 3.0 | 12 | 0.4366 | 0.9453 |
| 0.996 | 4.0 | 16 | 0.4239 | 0.9531 |
| 0.8647 | 5.0 | 20 | 0.4191 | 0.9531 |
| 0.8647 | 6.0 | 24 | 0.4066 | 0.9531 |
| 0.8647 | 7.0 | 28 | 0.4268 | 0.9531 |
| 0.6876 | 8.0 | 32 | 0.5275 | 0.9453 |
| 0.6876 | 9.0 | 36 | 0.6025 | 0.9453 |
| 0.5833 | 10.0 | 40 | 0.6144 | 0.9453 |
| 0.5833 | 11.0 | 44 | 0.6062 | 0.9453 |
| 0.5833 | 12.0 | 48 | 0.5946 | 0.9453 |
| 0.4071 | 13.0 | 52 | 0.5677 | 0.9453 |
| 0.4071 | 14.0 | 56 | 0.5733 | 0.9453 |
| 0.2545 | 15.0 | 60 | 0.5830 | 0.9453 |
| 0.2545 | 16.0 | 64 | 0.5768 | 0.9453 |
| 0.2545 | 17.0 | 68 | 0.5639 | 0.9453 |
| 0.1255 | 18.0 | 72 | 0.5467 | 0.9453 |
| 0.1255 | 19.0 | 76 | 0.5185 | 0.9453 |
| 0.1119 | 20.0 | 80 | 0.4410 | 0.9453 |
| 0.1119 | 21.0 | 84 | 0.4174 | 0.9531 |
| 0.1119 | 22.0 | 88 | 0.4014 | 0.9453 |
| 0.0568 | 23.0 | 92 | 0.4155 | 0.9531 |
| 0.0568 | 24.0 | 96 | 0.4084 | 0.9375 |
| 0.0295 | 25.0 | 100 | 0.3999 | 0.9297 |
| 0.0295 | 26.0 | 104 | 0.4070 | 0.9219 |
| 0.0295 | 27.0 | 108 | 0.4131 | 0.9219 |
| 0.0226 | 28.0 | 112 | 0.4255 | 0.9219 |
| 0.0226 | 29.0 | 116 | 0.4287 | 0.9219 |
| 0.0197 | 30.0 | 120 | 0.4395 | 0.9297 |
| 0.0197 | 31.0 | 124 | 0.4473 | 0.9297 |
| 0.0197 | 32.0 | 128 | 0.4604 | 0.9297 |
| 0.0161 | 33.0 | 132 | 0.4653 | 0.9297 |
| 0.0161 | 34.0 | 136 | 0.4682 | 0.9297 |
| 0.0114 | 35.0 | 140 | 0.4805 | 0.9297 |
| 0.0114 | 36.0 | 144 | 0.4598 | 0.9297 |
| 0.0114 | 37.0 | 148 | 0.4290 | 0.9297 |
| 0.0054 | 38.0 | 152 | 0.4322 | 0.9297 |
| 0.0054 | 39.0 | 156 | 0.4623 | 0.9219 |
| 0.0039 | 40.0 | 160 | 0.4877 | 0.9297 |
| 0.0039 | 41.0 | 164 | 0.4887 | 0.9297 |
| 0.0039 | 42.0 | 168 | 0.4805 | 0.9297 |
| 0.0003 | 43.0 | 172 | 0.4766 | 0.9219 |
| 0.0003 | 44.0 | 176 | 0.4759 | 0.9297 |
| 0.0 | 45.0 | 180 | 0.4779 | 0.9297 |
| 0.0 | 46.0 | 184 | 0.4799 | 0.9219 |
| 0.0 | 47.0 | 188 | 0.4816 | 0.9219 |
| 0.0 | 48.0 | 192 | 0.4829 | 0.9219 |
| 0.0 | 49.0 | 196 | 0.4841 | 0.9219 |
| 0.0 | 50.0 | 200 | 0.4850 | 0.9219 |
| 0.0 | 51.0 | 204 | 0.4859 | 0.9219 |
| 0.0 | 52.0 | 208 | 0.4867 | 0.9219 |
| 0.0 | 53.0 | 212 | 0.4873 | 0.9219 |
| 0.0 | 54.0 | 216 | 0.4879 | 0.9219 |
| 0.0 | 55.0 | 220 | 0.4883 | 0.9219 |
| 0.0 | 56.0 | 224 | 0.4887 | 0.9219 |
| 0.0 | 57.0 | 228 | 0.4890 | 0.9219 |
| 0.0 | 58.0 | 232 | 0.4894 | 0.9219 |
| 0.0 | 59.0 | 236 | 0.4896 | 0.9219 |
| 0.0 | 60.0 | 240 | 0.4899 | 0.9219 |
| 0.0 | 61.0 | 244 | 0.4903 | 0.9219 |
| 0.0 | 62.0 | 248 | 0.4907 | 0.9219 |
| 0.0 | 63.0 | 252 | 0.4912 | 0.9219 |
| 0.0 | 64.0 | 256 | 0.4916 | 0.9219 |
| 0.0 | 65.0 | 260 | 0.4920 | 0.9219 |
| 0.0 | 66.0 | 264 | 0.4924 | 0.9219 |
| 0.0 | 67.0 | 268 | 0.4927 | 0.9219 |
| 0.0 | 68.0 | 272 | 0.4931 | 0.9219 |
| 0.0 | 69.0 | 276 | 0.4934 | 0.9219 |
| 0.0 | 70.0 | 280 | 0.4938 | 0.9219 |
| 0.0 | 71.0 | 284 | 0.4943 | 0.9219 |
| 0.0 | 72.0 | 288 | 0.4945 | 0.9219 |
| 0.0 | 73.0 | 292 | 0.4949 | 0.9219 |
| 0.0 | 74.0 | 296 | 0.4953 | 0.9219 |
| 0.0 | 75.0 | 300 | 0.4955 | 0.9219 |
| 0.0 | 76.0 | 304 | 0.4959 | 0.9219 |
| 0.0 | 77.0 | 308 | 0.4962 | 0.9219 |
| 0.0 | 78.0 | 312 | 0.4965 | 0.9219 |
| 0.0 | 79.0 | 316 | 0.4970 | 0.9219 |
| 0.0 | 80.0 | 320 | 0.4975 | 0.9219 |
| 0.0 | 81.0 | 324 | 0.4978 | 0.9219 |
| 0.0 | 82.0 | 328 | 0.4982 | 0.9219 |
| 0.0 | 83.0 | 332 | 0.4985 | 0.9219 |
| 0.0 | 84.0 | 336 | 0.4987 | 0.9219 |
| 0.0 | 85.0 | 340 | 0.4988 | 0.9219 |
| 0.0 | 86.0 | 344 | 0.4990 | 0.9219 |
| 0.0 | 87.0 | 348 | 0.4993 | 0.9219 |
| 0.0 | 88.0 | 352 | 0.4994 | 0.9219 |
| 0.0 | 89.0 | 356 | 0.4996 | 0.9219 |
| 0.0 | 90.0 | 360 | 0.4998 | 0.9219 |
| 0.0 | 91.0 | 364 | 0.5001 | 0.9219 |
| 0.0 | 92.0 | 368 | 0.5004 | 0.9219 |
| 0.0 | 93.0 | 372 | 0.5006 | 0.9219 |
| 0.0 | 94.0 | 376 | 0.5009 | 0.9219 |
| 0.0 | 95.0 | 380 | 0.5012 | 0.9219 |
| 0.0 | 96.0 | 384 | 0.5013 | 0.9219 |
| 0.0 | 97.0 | 388 | 0.5017 | 0.9219 |
| 0.0 | 98.0 | 392 | 0.5021 | 0.9219 |
| 0.0 | 99.0 | 396 | 0.5021 | 0.9219 |
| 0.0 | 100.0 | 400 | 0.5022 | 0.9219 |
| 0.0 | 101.0 | 404 | 0.5025 | 0.9219 |
| 0.0 | 102.0 | 408 | 0.5029 | 0.9219 |
| 0.0 | 103.0 | 412 | 0.5030 | 0.9219 |
| 0.0 | 104.0 | 416 | 0.5033 | 0.9219 |
| 0.0 | 105.0 | 420 | 0.5037 | 0.9219 |
| 0.0 | 106.0 | 424 | 0.5040 | 0.9219 |
| 0.0 | 107.0 | 428 | 0.5044 | 0.9219 |
| 0.0 | 108.0 | 432 | 0.5046 | 0.9219 |
| 0.0 | 109.0 | 436 | 0.5047 | 0.9219 |
| 0.0 | 110.0 | 440 | 0.5050 | 0.9219 |
| 0.0 | 111.0 | 444 | 0.5053 | 0.9219 |
| 0.0 | 112.0 | 448 | 0.5057 | 0.9219 |
| 0.0 | 113.0 | 452 | 0.5061 | 0.9219 |
| 0.0 | 114.0 | 456 | 0.5065 | 0.9219 |
| 0.0 | 115.0 | 460 | 0.5070 | 0.9219 |
| 0.0 | 116.0 | 464 | 0.5073 | 0.9219 |
| 0.0 | 117.0 | 468 | 0.5077 | 0.9219 |
| 0.0 | 118.0 | 472 | 0.5080 | 0.9219 |
| 0.0 | 119.0 | 476 | 0.5082 | 0.9219 |
| 0.0 | 120.0 | 480 | 0.5085 | 0.9219 |
| 0.0 | 121.0 | 484 | 0.5087 | 0.9219 |
| 0.0 | 122.0 | 488 | 0.5090 | 0.9219 |
| 0.0 | 123.0 | 492 | 0.5095 | 0.9219 |
| 0.0 | 124.0 | 496 | 0.5098 | 0.9219 |
| 0.0 | 125.0 | 500 | 0.5102 | 0.9219 |
| 0.0 | 126.0 | 504 | 0.5107 | 0.9219 |
| 0.0 | 127.0 | 508 | 0.5111 | 0.9219 |
| 0.0 | 128.0 | 512 | 0.5115 | 0.9219 |
| 0.0 | 129.0 | 516 | 0.5118 | 0.9219 |
| 0.0 | 130.0 | 520 | 0.5120 | 0.9219 |
| 0.0 | 131.0 | 524 | 0.5122 | 0.9219 |
| 0.0 | 132.0 | 528 | 0.5126 | 0.9219 |
| 0.0 | 133.0 | 532 | 0.5127 | 0.9219 |
| 0.0 | 134.0 | 536 | 0.5129 | 0.9219 |
| 0.0 | 135.0 | 540 | 0.5131 | 0.9219 |
| 0.0 | 136.0 | 544 | 0.5132 | 0.9219 |
| 0.0 | 137.0 | 548 | 0.5134 | 0.9219 |
| 0.0 | 138.0 | 552 | 0.5135 | 0.9219 |
| 0.0 | 139.0 | 556 | 0.5136 | 0.9219 |
| 0.0 | 140.0 | 560 | 0.5137 | 0.9219 |
| 0.0 | 141.0 | 564 | 0.5138 | 0.9219 |
| 0.0 | 142.0 | 568 | 0.5139 | 0.9219 |
| 0.0 | 143.0 | 572 | 0.5140 | 0.9219 |
| 0.0 | 144.0 | 576 | 0.5140 | 0.9219 |
| 0.0 | 145.0 | 580 | 0.5141 | 0.9219 |
| 0.0 | 146.0 | 584 | 0.5141 | 0.9219 |
| 0.0 | 147.0 | 588 | 0.5141 | 0.9219 |
| 0.0 | 148.0 | 592 | 0.5142 | 0.9219 |
| 0.0 | 149.0 | 596 | 0.5142 | 0.9219 |
| 0.0 | 150.0 | 600 | 0.5142 | 0.9219 |

Framework versions

  • Transformers 4.32.0.dev0
  • Pytorch 2.0.1+cu118
  • Datasets 2.4.0
  • Tokenizers 0.13.3
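
To approximate this environment, the versions above can be pinned in a requirements file. A sketch: Transformers 4.32.0.dev0 is a pre-release build, so the nearest stable release is substituted, and the CUDA 11.8 wheel of PyTorch needs pip's extra index URL.

```text
--extra-index-url https://download.pytorch.org/whl/cu118
torch==2.0.1+cu118
transformers==4.32.0  # the card used 4.32.0.dev0, a development build
datasets==2.4.0
tokenizers==0.13.3
```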

Model tree for simonycl/best_model-yelp_polarity-64-87

Fine-tuned from albert-base-v2.