bertweet-base_ordinal_5_seed42_EN

This model is a fine-tuned version of vinai/bertweet-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.3762
  • MSE: 2.9969
  • RMSE: 1.7311
  • MAE: 0.9259
  • R²: 0.1490
  • F1: 0.7832
  • Precision: 0.7840
  • Recall: 0.7860
  • Accuracy: 0.7860
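
The card does not document usage, so the following is a minimal inference sketch: it assumes the checkpoint is hosted at Amala3/bertweet-base_ordinal_5_seed42_EN (the repo id on this card) and carries a 5-label classification head, as suggested by the ordinal_5 suffix in the model name.

```python
# Minimal inference sketch. The repo id comes from this card; the 5-label
# classification head is an assumption inferred from the "ordinal_5"
# suffix in the model name.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "Amala3/bertweet-base_ordinal_5_seed42_EN"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("An example tweet to score.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```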

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

  • learning_rate: 5e-06
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 200
  • num_epochs: 10
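
As a reconstruction, these settings map onto transformers.TrainingArguments as shown below. Only the values themselves are documented on this card; the Trainer-based setup and the output_dir name are assumptions.

```python
# Hedged reconstruction of the documented hyperparameters. Only the
# values are taken from this card; the Trainer-based setup and the
# output_dir name are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bertweet-base_ordinal_5_seed42_EN",  # hypothetical
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=10,
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are the library defaults
    # for the AdamW optimizer, so they need no explicit override.
)
```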

Training results

| Training Loss | Epoch  | Step | Validation Loss | MSE    | RMSE   | MAE    | R²      | F1     | Precision | Recall | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:-------:|:------:|:---------:|:------:|:--------:|
| 3.5232        | 0.4630 | 100  | 3.4011          | 3.4987 | 1.8705 | 1.7180 | -0.0102 | 0.4570 | 0.3669    | 0.6057 | 0.6057   |
| 3.2579        | 0.9259 | 200  | 3.1630          | 3.3081 | 1.8188 | 1.6110 | 0.0448  | 0.4570 | 0.3669    | 0.6057 | 0.6057   |
| 2.9868        | 1.3889 | 300  | 2.9383          | 3.4125 | 1.8473 | 1.3916 | 0.0146  | 0.4570 | 0.3669    | 0.6057 | 0.6057   |
| 2.7421        | 1.8519 | 400  | 2.7612          | 3.0757 | 1.7538 | 1.2167 | 0.1119  | 0.6594 | 0.7657    | 0.7076 | 0.7076   |
| 2.4831        | 2.3148 | 500  | 2.5697          | 2.7520 | 1.6589 | 1.1070 | 0.2054  | 0.7538 | 0.7580    | 0.7598 | 0.7598   |
| 2.3685        | 2.7778 | 600  | 2.5183          | 2.9478 | 1.7169 | 1.0888 | 0.1488  | 0.7405 | 0.7549    | 0.7520 | 0.7520   |
| 2.2204        | 3.2407 | 700  | 2.4655          | 2.9530 | 1.7184 | 0.9739 | 0.1473  | 0.7501 | 0.7776    | 0.7650 | 0.7650   |
| 2.0367        | 3.7037 | 800  | 2.4423          | 3.1462 | 1.7738 | 0.9843 | 0.0915  | 0.7546 | 0.7546    | 0.7546 | 0.7546   |
| 2.0266        | 4.1667 | 900  | 2.3620          | 2.9452 | 1.7161 | 0.9608 | 0.1496  | 0.7656 | 0.7653    | 0.7676 | 0.7676   |
| 1.8431        | 4.6296 | 1000 | 2.3728          | 3.0157 | 1.7366 | 0.9373 | 0.1292  | 0.7680 | 0.7679    | 0.7702 | 0.7702   |
| 1.8448        | 5.0926 | 1100 | 2.3544          | 2.8903 | 1.7001 | 0.9164 | 0.1654  | 0.7754 | 0.7759    | 0.7781 | 0.7781   |
| 1.7661        | 5.5556 | 1200 | 2.4342          | 3.0026 | 1.7328 | 0.9138 | 0.1330  | 0.7799 | 0.7794    | 0.7807 | 0.7807   |
| 1.6105        | 6.0185 | 1300 | 2.4327          | 3.1227 | 1.7671 | 0.9347 | 0.0983  | 0.7691 | 0.7686    | 0.7702 | 0.7702   |
| 1.5606        | 6.4815 | 1400 | 2.4423          | 3.0966 | 1.7597 | 0.9347 | 0.1059  | 0.7732 | 0.7737    | 0.7728 | 0.7728   |
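
The metric code is not documented. The mix of regression metrics (MSE, RMSE, MAE, R²) with classification metrics (F1, precision, recall, accuracy), and the fact that recall equals accuracy in every row (which holds exactly for weighted-average recall), are consistent with a compute_metrics function along these lines; treat this as a sketch, not the authors' code.

```python
# Hedged compute_metrics sketch: assumes logits are argmax-ed to integer
# class indices and the regression metrics are computed on those indices.
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    f1_score,
    mean_absolute_error,
    mean_squared_error,
    precision_score,
    r2_score,
    recall_score,
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    mse = mean_squared_error(labels, preds)
    return {
        "mse": mse,
        "rmse": float(np.sqrt(mse)),  # e.g. sqrt(2.9969) ~= 1.7311
        "mae": mean_absolute_error(labels, preds),
        "r2": r2_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),
        "precision": precision_score(labels, preds, average="weighted"),
        "recall": recall_score(labels, preds, average="weighted"),
        "accuracy": accuracy_score(labels, preds),
    }
```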

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.1.2
  • Datasets 2.18.0
  • Tokenizers 0.19.1