# clapAI/phobert-base-v2-VSMEC-ep50

This model is a fine-tuned version of clapAI/phobert-base-v2-VSMEC-ep30. It achieves the following results on the evaluation set:

- Loss: 1.3227
- Micro F1: 62.6822
- Micro Precision: 62.6822
- Micro Recall: 62.6822
- Macro F1: 56.2645
- Macro Precision: 56.6340
- Macro Recall: 56.4069
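The three micro-averaged metrics above are identical, and that is expected rather than a reporting error: in single-label multi-class evaluation, micro precision, micro recall, and micro F1 all reduce to plain accuracy. A minimal pure-Python sketch (the labels are illustrative, not from the VSMEC evaluation set):

```python
# In single-label multi-class classification, every misprediction counts once
# as a false positive (for the predicted class) and once as a false negative
# (for the true class), so micro TP+FP == micro TP+FN == number of examples.
# Hence micro precision == micro recall == micro F1 == accuracy.

def micro_prf(y_true, y_pred, labels):
    tp = fp = fn = 0
    for c in labels:
        tp += sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp += sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn += sum(t == c and p != c for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative labels only (hypothetical, not VSMEC data).
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]
p, r, f1 = micro_prf(y_true, y_pred, labels={0, 1, 2})
accuracy = sum(t == q for t, q in zip(y_true, y_pred)) / len(y_true)
print(p, r, f1, accuracy)  # all four values are equal
```

The macro-averaged metrics, by contrast, weight each of the emotion classes equally, which is why they differ from one another and sit below the micro scores here.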

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 20.0
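These values are internally consistent: the per-device batch size times the gradient-accumulation steps yields the reported total train batch size, and the warmup ratio translates into only a handful of warmup steps over the 420 optimizer steps that appear in the training-results table. A quick sanity check in pure Python (assuming one accumulation group, since 64 × 4 already equals 256 despite the multi-GPU setting):

```python
# Sanity check on the hyperparameters above; no Trainer required.
# total_steps = 420 is read off the last row of the training-results table.

train_batch_size = 64           # per-device batch size
gradient_accumulation_steps = 4
num_devices = 1                 # assumption: 64 * 4 * 1 matches the reported 256

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)   # 256, matching the reported value

warmup_ratio = 0.01
total_steps = 420
warmup_steps = int(warmup_ratio * total_steps)
print(warmup_steps)             # 4 — the cosine schedule warms up for only ~4 steps
```

With so few warmup steps, the learning rate reaches its 5e-05 peak almost immediately and then decays along the cosine curve for the remaining training.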

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Micro F1 | Micro Precision | Micro Recall | Macro F1 | Macro Precision | Macro Recall |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|
| 0.4922        | 1.0     | 22   | 1.2216          | 61.3703  | 61.3703         | 61.3703      | 54.9018  | 55.1560         | 57.1920      |
| 0.3988        | 2.0     | 44   | 1.3227          | 62.6822  | 62.6822         | 62.6822      | 56.2645  | 56.6340         | 56.4069      |
| 0.3171        | 3.0     | 66   | 1.4018          | 61.0787  | 61.0787         | 61.0787      | 55.9050  | 54.4466         | 58.7679      |
| 0.2397        | 4.0     | 88   | 1.5219          | 60.2041  | 60.2041         | 60.2041      | 54.2564  | 53.7234         | 56.1942      |
| 0.1419        | 5.0     | 110  | 1.6345          | 60.6414  | 60.6414         | 60.6414      | 54.0918  | 56.2591         | 53.9106      |
| 0.1205        | 6.0     | 132  | 1.6928          | 61.9534  | 61.9534         | 61.9534      | 56.2476  | 57.1125         | 56.1225      |
| 0.114         | 7.0     | 154  | 1.7570          | 60.7872  | 60.7872         | 60.7872      | 55.6745  | 55.7671         | 56.2748      |
| 0.1383        | 8.0     | 176  | 1.6880          | 62.5364  | 62.5364         | 62.5364      | 56.8177  | 56.9438         | 57.3424      |
| 0.1112        | 9.0     | 198  | 1.6862          | 61.9534  | 61.9534         | 61.9534      | 57.0376  | 56.5988         | 57.8709      |
| 0.0914        | 10.0    | 220  | 1.8345          | 61.3703  | 61.3703         | 61.3703      | 56.2706  | 55.9540         | 57.4323      |
| 0.0628        | 11.0    | 242  | 1.8067          | 61.5160  | 61.5160         | 61.5160      | 56.8459  | 56.0387         | 58.4203      |
| 0.0655        | 12.0    | 264  | 1.8149          | 61.8076  | 61.8076         | 61.8076      | 56.8425  | 55.8052         | 58.6072      |
| 0.0552        | 13.0    | 286  | 1.8840          | 61.2245  | 61.2245         | 61.2245      | 56.4170  | 56.5019         | 57.2507      |
| 0.043         | 14.0    | 308  | 1.8475          | 61.5160  | 61.5160         | 61.5160      | 56.4544  | 55.6672         | 57.6472      |
| 0.0311        | 15.0    | 330  | 1.8673          | 61.5160  | 61.5160         | 61.5160      | 56.7551  | 56.3454         | 57.8247      |
| 0.0463        | 16.0    | 352  | 1.8799          | 61.3703  | 61.3703         | 61.3703      | 56.3827  | 56.1685         | 57.1884      |
| 0.0333        | 17.0    | 374  | 1.8857          | 61.0787  | 61.0787         | 61.0787      | 55.6291  | 55.0580         | 56.6779      |
| 0.0309        | 18.0    | 396  | 1.8878          | 61.8076  | 61.8076         | 61.8076      | 56.5094  | 56.2235         | 57.2838      |
| 0.0337        | 19.0    | 418  | 1.8893          | 61.8076  | 61.8076         | 61.8076      | 56.5094  | 56.2235         | 57.2838      |
| 0.0229        | 19.0920 | 420  | 1.8907          | 61.6618  | 61.6618         | 61.6618      | 56.4402  | 56.1451         | 57.2171      |
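The headline metrics at the top of this card match the epoch-2 row, which holds the best micro F1 of the entire run even though training continued for 20 epochs (validation loss rises steadily after epoch 1 while training loss keeps falling, a typical overfitting pattern). A hypothetical sketch of how that row could be selected from the log, assuming micro F1 is the model-selection criterion:

```python
# Hypothetical best-checkpoint selection: pick the epoch with the highest
# validation micro F1. The (epoch, validation_loss, micro_f1) triples are
# copied from the table above, truncated to the first five epochs for
# brevity — epoch 2 remains the micro-F1 maximum over the full 20 epochs.

log = [
    (1.0, 1.2216, 61.3703),
    (2.0, 1.3227, 62.6822),
    (3.0, 1.4018, 61.0787),
    (4.0, 1.5219, 60.2041),
    (5.0, 1.6345, 60.6414),
]

best_epoch, best_loss, best_f1 = max(log, key=lambda row: row[2])
print(best_epoch, best_loss, best_f1)  # 2.0 1.3227 62.6822 — the headline row
```

Note that selecting by lowest validation loss instead would favor epoch 1; the card's headline numbers are consistent with F1-based selection, not loss-based.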

### Framework versions

- Transformers 4.50.0
- PyTorch 2.4.0+cu121
- Datasets 2.15.0
- Tokenizers 0.21.1

Model size: 135M params (Safetensors, BF16)