# deberta-semeval25_EN08_fold2

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 8.6101
- Precision Samples: 0.1209
- Recall Samples: 0.5559
- F1 Samples: 0.1849
- Precision Macro: 0.7685
- Recall Macro: 0.3780
- F1 Macro: 0.2402
- Precision Micro: 0.1172
- Recall Micro: 0.4697
- F1 Micro: 0.1875
- Precision Weighted: 0.5025
- Recall Weighted: 0.4697
- F1 Weighted: 0.1391
## Model description
More information needed
## Intended uses & limitations
More information needed
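Pending more detail from the authors, below is a minimal loading sketch. It assumes the checkpoint is a multi-label sequence classifier with sigmoid decoding at a 0.5 threshold, which the samples/macro/micro metric variants above suggest but the card does not confirm.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "g-assismoraes/deberta-semeval25_EN08_fold2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("An example input sentence.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Assumed multi-label decoding: sigmoid per label, threshold at 0.5.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```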
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
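For reference, the list above maps onto a `TrainingArguments` configuration roughly as follows. This is a hedged reconstruction, not the authors' script: `output_dir` is an assumption, and any unlisted settings fall back to Transformers defaults.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is assumed.
training_args = TrainingArguments(
    output_dir="deberta-semeval25_EN08_fold2",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```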
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10.3898 | 1.0 | 19 | 9.9343 | 1.0 | 0.0 | 0.0 | 1.0 | 0.1889 | 0.1889 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 10.0522 | 2.0 | 38 | 9.6084 | 0.1874 | 0.2767 | 0.2067 | 0.9620 | 0.2178 | 0.1992 | 0.1791 | 0.1606 | 0.1693 | 0.8331 | 0.1606 | 0.0539 |
| 9.7928 | 3.0 | 57 | 9.3874 | 0.1336 | 0.3540 | 0.1804 | 0.9515 | 0.2427 | 0.2012 | 0.1294 | 0.2333 | 0.1665 | 0.7959 | 0.2333 | 0.0606 |
| 9.4936 | 4.0 | 76 | 9.1515 | 0.1186 | 0.4298 | 0.1719 | 0.8698 | 0.2874 | 0.2133 | 0.1156 | 0.3242 | 0.1704 | 0.6379 | 0.3242 | 0.0854 |
| 9.1022 | 5.0 | 95 | 8.9739 | 0.1205 | 0.4944 | 0.1790 | 0.8336 | 0.3224 | 0.2227 | 0.1158 | 0.3848 | 0.1780 | 0.5852 | 0.3848 | 0.1061 |
| 9.2254 | 6.0 | 114 | 8.8771 | 0.1207 | 0.5078 | 0.1798 | 0.8340 | 0.3302 | 0.2245 | 0.1170 | 0.4030 | 0.1813 | 0.5860 | 0.4030 | 0.1106 |
| 8.9117 | 7.0 | 133 | 8.7591 | 0.1147 | 0.5250 | 0.1755 | 0.7877 | 0.3399 | 0.2259 | 0.1118 | 0.4273 | 0.1772 | 0.5301 | 0.4273 | 0.1160 |
| 8.7312 | 8.0 | 152 | 8.6366 | 0.1215 | 0.5708 | 0.1872 | 0.7836 | 0.3750 | 0.2412 | 0.1171 | 0.4697 | 0.1874 | 0.5273 | 0.4697 | 0.1418 |
| 8.953 | 9.0 | 171 | 8.6276 | 0.1199 | 0.5553 | 0.1831 | 0.7682 | 0.3625 | 0.2377 | 0.1165 | 0.4667 | 0.1864 | 0.5065 | 0.4667 | 0.1396 |
| 8.1407 | 10.0 | 190 | 8.6101 | 0.1209 | 0.5559 | 0.1849 | 0.7685 | 0.3780 | 0.2402 | 0.1172 | 0.4697 | 0.1875 | 0.5025 | 0.4697 | 0.1391 |
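The samples/macro/micro/weighted columns correspond to scikit-learn's standard averaging modes for multi-label evaluation. The snippet below illustrates how such variants are computed; the `y_true`/`y_pred` matrices are hypothetical and not taken from this run.

```python
import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score

# Hypothetical binary label matrices of shape [n_samples, n_labels].
y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 1, 0], [0, 1, 0]])

# The four averaging modes reported in the table above.
for avg in ("samples", "macro", "micro", "weighted"):
    p = precision_score(y_true, y_pred, average=avg, zero_division=0)
    r = recall_score(y_true, y_pred, average=avg, zero_division=0)
    f = f1_score(y_true, y_pred, average=avg, zero_division=0)
    print(f"{avg}: P={p:.4f} R={r:.4f} F1={f:.4f}")
```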
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1