koelectra_emotion
This model is a fine-tuned version of monologg/koelectra-small-finetuned-sentiment on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.9405
- Accuracy: 0.6545
- F1: 0.6519
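The accuracy and F1 values above imply a metric function along these lines. This is a minimal sketch, assuming the Hugging Face `evaluate` library and weighted-average F1; the card does not state which averaging mode was used:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair passed by the Trainer
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        # "weighted" averaging is an assumption; the card does not document the F1 variant
        "f1": f1.compute(predictions=preds, references=labels, average="weighted")["f1"],
    }
```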
Model description
More information needed
Intended uses & limitations
More information needed
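As a minimal usage sketch (not part of the original card), the model can be loaded for inference with the `transformers` pipeline. The emotion label set is not documented here, so returned labels should be checked against the model's `id2label` mapping:

```python
from transformers import pipeline

# Hypothetical usage example; the repo id follows this model card (be2be2/koelectra_emotion).
classifier = pipeline("text-classification", model="be2be2/koelectra_emotion")

# Korean example sentence: "I'm really happy today!"
print(classifier("오늘 정말 행복해요!"))
# -> [{'label': ..., 'score': ...}]  (label names depend on the model's id2label mapping)
```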
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
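A minimal sketch of how these hyperparameters might map onto `TrainingArguments`; only the values listed above come from the card, while the output directory and evaluation strategy below are assumptions:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="koelectra_emotion",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="epoch",            # assumed: the results table reports per-epoch validation metrics
)
```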
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|---|---|---|---|---|---|
| 1.4954 | 1.0 | 910 | 1.2271 | 0.5471 | 0.5227 |
| 1.1285 | 2.0 | 1820 | 1.0318 | 0.6142 | 0.6084 |
| 0.9946 | 3.0 | 2730 | 0.9852 | 0.6230 | 0.6205 |
| 0.9297 | 4.0 | 3640 | 0.9661 | 0.6337 | 0.6310 |
| 0.8899 | 5.0 | 4550 | 0.9547 | 0.6421 | 0.6385 |
| 0.8601 | 6.0 | 5460 | 0.9401 | 0.6497 | 0.6473 |
| 0.839 | 7.0 | 6370 | 0.9488 | 0.6458 | 0.6431 |
| 0.8181 | 8.0 | 7280 | 0.9392 | 0.6541 | 0.6528 |
| 0.8082 | 9.0 | 8190 | 0.9388 | 0.6537 | 0.6517 |
| 0.8016 | 10.0 | 9100 | 0.9405 | 0.6545 | 0.6519 |
Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0