# DanSumT5-base-finetuned-test_6887-finetuned-test_1006-finetuned-test_11009
This model is a fine-tuned version of emilstabil/DanSumT5-base-finetuned-test_6887-finetuned-test_1006 on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 2.3782
- Rouge1: 32.39
- Rouge2: 8.6259
- RougeL: 18.9711
- RougeLsum: 29.8246
- Gen Len: 126.34
## Model description
More information needed
## Intended uses & limitations
More information needed
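Pending a fuller description, the checkpoint can be loaded with the standard `transformers` seq2seq API. The sketch below is an assumption-laden example, not documented usage: the repository id is inferred from the model name and the base model's owner, and the generation settings are chosen to roughly match the reported generation length (~126 tokens).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repository id inferred from the model name and base-model owner (assumption).
model_id = "emilstabil/DanSumT5-base-finetuned-test_6887-finetuned-test_1006-finetuned-test_11009"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input; the model targets Danish news summarization.
article = "Indsaet en dansk artikel her."

inputs = tokenizer(article, truncation=True, return_tensors="pt")
# max_length=128 chosen to match the reported Gen Len of ~126 tokens (assumption).
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```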
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
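
A minimal sketch of how these values map onto `Seq2SeqTrainingArguments` in Transformers 4.32. The `output_dir` and the per-epoch evaluation cadence are assumptions, not taken from the original run; Adam with the listed betas and epsilon is the library default, so it needs no explicit argument:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="dansumt5-base-finetuned",  # assumed; not from the original run
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,         # 2 x 4 = total train batch size of 8
    num_train_epochs=15,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",           # assumed from the per-epoch results below
    predict_with_generate=True,            # needed to compute ROUGE and Gen Len
)
```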
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | RougeL  | RougeLsum | Gen Len |
|---------------|-------|------|-----------------|---------|--------|---------|-----------|---------|
| No log        | 1.0   | 100  | 2.3679          | 32.1169 | 8.3429 | 18.77   | 29.6623   | 126.64  |
| No log        | 2.0   | 200  | 2.3731          | 32.3698 | 8.6912 | 18.9051 | 29.7509   | 126.33  |
| No log        | 3.0   | 300  | 2.3613          | 31.6641 | 8.1301 | 18.0445 | 29.15     | 126.93  |
| No log        | 4.0   | 400  | 2.3572          | 32.2198 | 8.4769 | 18.4906 | 29.7567   | 126.98  |
| 2.0202        | 5.0   | 500  | 2.3665          | 32.3042 | 8.3662 | 18.508  | 29.4379   | 126.47  |
| 2.0202        | 6.0   | 600  | 2.3637          | 32.1451 | 8.7682 | 18.8803 | 29.6716   | 126.0   |
| 2.0202        | 7.0   | 700  | 2.3640          | 32.1651 | 8.509  | 18.7387 | 29.588    | 125.97  |
| 2.0202        | 8.0   | 800  | 2.3667          | 32.0836 | 8.5881 | 18.7982 | 29.7275   | 126.21  |
| 2.0202        | 9.0   | 900  | 2.3733          | 32.0533 | 8.4997 | 18.6971 | 29.4086   | 125.88  |
| 1.864         | 10.0  | 1000 | 2.3741          | 31.7214 | 8.226  | 18.3299 | 29.3011   | 125.79  |
| 1.864         | 11.0  | 1100 | 2.3723          | 32.1068 | 8.5369 | 18.7853 | 29.4877   | 126.67  |
| 1.864         | 12.0  | 1200 | 2.3784          | 32.6049 | 8.8493 | 19.2296 | 30.1329   | 126.99  |
| 1.864         | 13.0  | 1300 | 2.3745          | 32.3626 | 8.6869 | 19.0018 | 29.7956   | 126.42  |
| 1.864         | 14.0  | 1400 | 2.3771          | 32.8879 | 8.8559 | 18.9569 | 30.255    | 126.02  |
| 1.7909        | 15.0  | 1500 | 2.3782          | 32.39   | 8.6259 | 18.9711 | 29.8246   | 126.34  |
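
The ROUGE columns appear to follow the `evaluate` library's convention (F-measure in [0, 1], here scaled to 0-100). A minimal sketch of computing comparable numbers; the prediction and reference texts are placeholders, and a real evaluation would use the held-out set:

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder texts standing in for model outputs and gold summaries.
predictions = ["modellen opsummerer artiklen kort"]
references = ["en kort opsummering af artiklen"]

scores = rouge.compute(predictions=predictions, references=references)
# Scale to 0-100 to match the reporting convention used in this card.
print({k: round(v * 100, 4) for k, v in scores.items()})
```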
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0
- Datasets 2.12.0
- Tokenizers 0.13.3