# LED_ACLsum_all_aspects
This model is a fine-tuned version of allenai/led-base-16384 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.8994
- Rouge1: 0.3537
- Rouge2: 0.143
- Rougel: 0.297
- Rougelsum: 0.2963
- Gen Len: 20.9033
- Bleu: 0.0675
- Precisions: 0.1559
- Brevity Penalty: 0.6296
- Length Ratio: 0.6837
- Translation Length: 4828.0
- Reference Length: 7062.0
- Precision: 0.8922
- Recall: 0.8771
- F1: 0.8845
- Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4)
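The card does not include a usage snippet; below is a minimal inference sketch, assuming the checkpoint is loaded by its repository id and follows the standard transformers LED summarization API (the input document is a hypothetical placeholder):

```python
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

# Repository id taken from the card title; adjust if the checkpoint lives elsewhere.
checkpoint = "floflodebilbao/LED_ACLsum_all_aspects"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = LEDForConditionalGeneration.from_pretrained(checkpoint)

# Hypothetical input; LED-base accepts inputs up to 16384 tokens.
document = "Replace this with the scientific paper text to summarize."
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=16384)

# LED expects global attention on at least the first token.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

# The evaluation set averaged ~21 generated tokens, so a short max length is used here.
summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_new_tokens=32,
    num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```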
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
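These values map directly onto transformers training arguments; a minimal sketch, assuming the standard Seq2SeqTrainer setup (the output directory and evaluation settings are assumptions, not documented in the card):

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; the effective train batch size is 4 * 4 = 16.
training_args = Seq2SeqTrainingArguments(
    output_dir="led_aclsum_all_aspects",  # placeholder output path, not taken from the card
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    optim="adamw_torch",
    fp16=True,                            # native AMP mixed precision
    predict_with_generate=True,           # needed for ROUGE/BLEU/BERTScore during evaluation
    eval_strategy="epoch",                # the results table reports one evaluation per epoch
)
# These arguments would then be passed to a Seq2SeqTrainer together with the
# allenai/led-base-16384 model, its tokenizer, and the tokenized train/eval datasets.
```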
### Training results
Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Precision | Recall | F1 | Hashcode |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
No log | 1.0 | 19 | 5.3039 | 0.2366 | 0.0422 | 0.1815 | 0.1814 | 20.3667 | 0.0181 | 0.0718 | 0.6626 | 0.7084 | 5003.0 | 7062.0 | 0.8757 | 0.8602 | 0.8678 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) |
No log | 2.0 | 38 | 3.7375 | 0.2822 | 0.0849 | 0.2274 | 0.2284 | 20.6433 | 0.0442 | 0.1099 | 0.6381 | 0.69 | 4873.0 | 7062.0 | 0.8826 | 0.8679 | 0.8751 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) |
No log | 3.0 | 57 | 3.1526 | 0.2879 | 0.088 | 0.232 | 0.2322 | 20.85 | 0.0444 | 0.1101 | 0.6459 | 0.6958 | 4914.0 | 7062.0 | 0.882 | 0.8687 | 0.8752 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) |
No log | 4.0 | 76 | 2.7702 | 0.3011 | 0.1037 | 0.2508 | 0.251 | 20.8933 | 0.0508 | 0.121 | 0.6366 | 0.6889 | 4865.0 | 7062.0 | 0.8824 | 0.87 | 0.876 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) |
No log | 5.0 | 95 | 2.4854 | 0.3133 | 0.111 | 0.2595 | 0.2597 | 20.86 | 0.0514 | 0.126 | 0.6305 | 0.6844 | 4833.0 | 7062.0 | 0.8857 | 0.8725 | 0.879 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) |
No log | 6.0 | 114 | 2.2629 | 0.3255 | 0.1191 | 0.2732 | 0.2724 | 20.9133 | 0.0578 | 0.1348 | 0.6425 | 0.6933 | 4896.0 | 7062.0 | 0.8872 | 0.8738 | 0.8804 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) |
No log | 7.0 | 133 | 2.1041 | 0.3444 | 0.1375 | 0.2888 | 0.2884 | 20.9133 | 0.0656 | 0.1493 | 0.6279 | 0.6824 | 4819.0 | 7062.0 | 0.8914 | 0.8767 | 0.8839 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) |
No log | 8.0 | 152 | 1.9925 | 0.3617 | 0.1468 | 0.3023 | 0.3022 | 20.9233 | 0.0684 | 0.158 | 0.6296 | 0.6837 | 4828.0 | 7062.0 | 0.8925 | 0.8782 | 0.8852 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) |
No log | 9.0 | 171 | 1.9258 | 0.3608 | 0.1447 | 0.2999 | 0.2998 | 20.9133 | 0.0689 | 0.1587 | 0.6286 | 0.683 | 4823.0 | 7062.0 | 0.8935 | 0.8785 | 0.8858 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) |
No log | 10.0 | 190 | 1.8994 | 0.3537 | 0.143 | 0.297 | 0.2963 | 20.9033 | 0.0675 | 0.1559 | 0.6296 | 0.6837 | 4828.0 | 7062.0 | 0.8922 | 0.8771 | 0.8845 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.52.4) |
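The Precisions, Brevity Penalty, Length Ratio, and Translation/Reference Length columns follow the output format of the BLEU metric, and the Precision, Recall, F1, and Hashcode columns match the output of BERTScore with roberta-large (layer 17, no IDF). A hedged sketch of computing the same metric families with the evaluate library, using hypothetical prediction and reference lists:

```python
import evaluate

# Hypothetical generated summaries and gold references; replace with real model outputs.
predictions = ["the model improves rouge on long-document summarization"]
references = ["the proposed model improves rouge scores for long document summarization"]

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")
bertscore = evaluate.load("bertscore")

# ROUGE-1/2/L/Lsum, as in the Rouge* columns above.
print(rouge.compute(predictions=predictions, references=references))

# BLEU also reports precisions, brevity_penalty, length_ratio, translation_length,
# and reference_length, matching the corresponding columns above.
print(bleu.compute(predictions=predictions, references=references))

# BERTScore with roberta-large yields the precision/recall/F1 lists and the hashcode.
print(bertscore.compute(predictions=predictions, references=references, model_type="roberta-large"))
```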
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1