# flan-t5-base-samsum-tiny
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset (the `samsum` in the model name suggests the SAMSum dialogue-summarization corpus, but the card does not confirm this). It achieves the following results on the evaluation set; a minimal inference sketch follows the metrics:
- Loss: 1.5151
- Rouge1: 47.2094
- Rouge2: 22.8909
- RougeL: 38.9786
- RougeLsum: 42.8894
- Gen Len: 17.68 (mean generated length, in tokens)
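
For a quick smoke test, here is a minimal, hedged inference sketch. It assumes the repo ID `EdBerg/flan-t5-base-samsum-tiny` (the full name under which this card is published), and the dialogue-style input is only an inference from the `samsum` in the model name:

```python
from transformers import pipeline

# Hedged sketch, not part of the original card. The dialogue input assumes a
# SAMSum-style summarization task, which the card itself does not confirm.
summarizer = pipeline("summarization", model="EdBerg/flan-t5-base-samsum-tiny")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
# Gen Len above averages ~18 tokens, so a small generation budget suffices.
print(summarizer(dialogue, max_new_tokens=32)[0]["summary_text"])
```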
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a matching `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
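
As referenced above, here is a hedged sketch of how these values map onto `transformers.Seq2SeqTrainingArguments`. The `output_dir`, evaluation strategy, and `predict_with_generate` flag are assumptions (per-epoch evaluation is suggested by the results table below, but the actual training script is not part of this card):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: reproduces the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-samsum-tiny",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",           # AdamW (torch); default betas/epsilon match the card
    lr_scheduler_type="linear",
    num_train_epochs=5,
    eval_strategy="epoch",         # assumption: evaluation appears to run once per epoch
    predict_with_generate=True,    # assumption: needed to compute ROUGE / Gen Len at eval time
)
```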
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|-----------|---------|
| No log        | 1.0   | 13   | 1.5191          | 46.4667 | 22.8802 | 39.5714 | 42.4271   | 16.76   |
| No log        | 2.0   | 26   | 1.5157          | 46.9566 | 22.5577 | 39.2871 | 42.8101   | 17.26   |
| No log        | 3.0   | 39   | 1.5151          | 47.2094 | 22.8909 | 38.9786 | 42.8894   | 17.68   |
| No log        | 4.0   | 52   | 1.5185          | 46.6053 | 22.5631 | 38.2157 | 42.3972   | 17.57   |
| No log        | 5.0   | 65   | 1.5198          | 46.7008 | 22.6139 | 38.3914 | 42.6006   | 17.59   |
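
The ROUGE columns are on the usual 0-100 scale. As a hedged illustration of how such scores are typically computed, here is a sketch using the `evaluate` library; the prediction/reference strings are made-up placeholders, not taken from this model's outputs:

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder decoded outputs; in practice these come from model.generate().
predictions = ["amanda baked cookies and will bring jerry some tomorrow"]
references = ["Amanda baked cookies and will bring Jerry some tomorrow."]

scores = rouge.compute(
    predictions=predictions,
    references=references,
    use_stemmer=True,  # stemming is the default in the HF summarization examples
)
# Scale to the 0-100 range used in the table above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```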
### Framework versions
- Transformers 4.52.4
- PyTorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1