---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - f1
model-index:
  - name: ModernBERT_wine_quality_reviews_ft
    results: []
---

# ModernBERT_wine_quality_reviews_ft

This model is a fine-tuned version of answerdotai/ModernBERT-base on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.8255
- Accuracy: 0.6865
- F1: 0.6873
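
If the model has been pushed to the Hugging Face Hub, it can be loaded for inference with the standard text-classification pipeline. Below is a minimal sketch; the repository id scbtm/ModernBERT_wine_quality_reviews_ft is assumed from the model card name, and the label scheme depends on how the quality classes were encoded during fine-tuning.

```python
from transformers import pipeline

# Repository id is assumed from the model card name; adjust if the model
# lives under a different namespace or local path.
classifier = pipeline(
    "text-classification",
    model="scbtm/ModernBERT_wine_quality_reviews_ft",
)

review = "Aromas of ripe cherry and vanilla, with firm tannins and a long finish."
print(classifier(review))
# Output is a list of {'label': ..., 'score': ...} dicts; the label names
# depend on the id2label mapping saved with the fine-tuned model.
```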

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch reproducing them follows the list):

- learning_rate: 8e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.98) and epsilon=1e-06; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
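
These settings map onto the Trainer API roughly as follows. This is a minimal sketch, not the original training script: the output directory, the model, and the tokenized datasets (model, tokenized_train, tokenized_eval) are placeholders.

```python
from transformers import TrainingArguments, Trainer

# Hyperparameters copied from the list above; everything else is a placeholder.
training_args = TrainingArguments(
    output_dir="ModernBERT_wine_quality_reviews_ft",
    learning_rate=8e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-06,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
)

trainer = Trainer(
    model=model,                    # placeholder: ModernBERT-base with a classification head
    args=training_args,
    train_dataset=tokenized_train,  # placeholder: tokenized training split
    eval_dataset=tokenized_eval,    # placeholder: tokenized evaluation split
)
trainer.train()
```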

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 1.0765        | 0.1653 | 350  | 0.8973          | 0.5849   | 0.5797 |
| 0.848         | 0.3305 | 700  | 0.7721          | 0.6516   | 0.6483 |
| 0.7796        | 0.4958 | 1050 | 0.7682          | 0.6466   | 0.6470 |
| 0.7671        | 0.6610 | 1400 | 0.7448          | 0.6611   | 0.6566 |
| 0.7434        | 0.8263 | 1750 | 0.7378          | 0.6643   | 0.6634 |
| 0.7232        | 0.9915 | 2100 | 0.7086          | 0.6789   | 0.6736 |
| 0.653         | 1.1568 | 2450 | 0.7150          | 0.6768   | 0.6764 |
| 0.6312        | 1.3220 | 2800 | 0.7119          | 0.6785   | 0.6761 |
| 0.6298        | 1.4873 | 3150 | 0.6982          | 0.6879   | 0.6843 |
| 0.6307        | 1.6525 | 3500 | 0.7072          | 0.6863   | 0.6864 |
| 0.6338        | 1.8178 | 3850 | 0.6950          | 0.6862   | 0.6813 |
| 0.6252        | 1.9830 | 4200 | 0.6996          | 0.6850   | 0.6853 |
| 0.4418        | 2.1483 | 4550 | 0.8353          | 0.6911   | 0.6899 |
| 0.4016        | 2.3135 | 4900 | 0.8428          | 0.6825   | 0.6815 |
| 0.404         | 2.4788 | 5250 | 0.8241          | 0.6824   | 0.6822 |
| 0.404         | 2.6440 | 5600 | 0.8255          | 0.6865   | 0.6873 |
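
The Accuracy and F1 columns are produced by the Trainer's evaluation loop from a compute_metrics callback. A minimal sketch using the evaluate library is shown below; weighted-average F1 is an assumption, since the card does not record which averaging mode was used.

```python
import numpy as np
import evaluate

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(predictions=predictions, references=labels)["accuracy"],
        # "weighted" averaging is an assumption; the card does not state the setting used.
        "f1": f1_metric.compute(predictions=predictions, references=labels, average="weighted")["f1"],
    }
```

This function would be passed to the Trainer via its compute_metrics argument.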

### Framework versions

- Transformers 4.48.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0