Visualize in Weights & Biases

t5-base-finetuned-arxiver

This model is a fine-tuned version of t5-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4141
  • ROUGE-1: 88.4527
  • ROUGE-2: 85.9716
  • ROUGE-L: 88.2515
  • ROUGE-Lsum: 88.2522

Model description

More information needed

Intended uses & limitations

More information needed
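
Although intended uses are not documented, this is a T5-base seq2seq fine-tune (the repository name suggests summarization of arXiv papers), so a minimal inference sketch would look like the one below. The input text and the `summarize:` prefix are illustrative assumptions, not details confirmed by this card.

```python
# Minimal inference sketch; the task and prompt format are assumptions, not documented in this card.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mishikaa16/t5-base-finetuned-arxiver"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "summarize: <paper abstract or text goes here>"  # illustrative input
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```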

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine_with_restarts
  • num_epochs: 7
  • mixed_precision_training: Native AMP
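
For context, the hyperparameters above map roughly to the Seq2SeqTrainingArguments sketch below. This is a reconstruction under stated assumptions (output directory, fp16 for "Native AMP"), not the original training script.

```python
# Reconstruction sketch of the reported hyperparameters; not the original training script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-arxiver",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,           # effective train batch size: 32
    num_train_epochs=7,
    lr_scheduler_type="cosine_with_restarts",
    optim="adamw_torch",                     # AdamW defaults: betas=(0.9, 0.999), eps=1e-08
    seed=42,
    fp16=True,                               # "Native AMP" mixed-precision training
)
```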

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum |
|---------------|-------|------|-----------------|---------|---------|---------|------------|
| 0.3076        | 1.0   | 1584 | 0.4287          | 88.3592 | 85.8872 | 88.1554 | 88.1659    |
| 0.2896        | 2.0   | 3168 | 0.4141          | 88.4527 | 85.9716 | 88.2515 | 88.2522    |
| 0.5806        | 3.0   | 4752 | 0.4071          | 88.4403 | 85.9628 | 88.2306 | 88.2359    |
| 0.5059        | 4.0   | 6336 | 0.3993          | 88.4437 | 85.9587 | 88.2291 | 88.2381    |
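
The ROUGE columns above are on a 0-100 scale. A typical way to compute such scores is with the `evaluate` library, as sketched below; this card does not confirm the exact evaluation code, so the snippet is illustrative only.

```python
# Illustrative ROUGE computation with the `evaluate` library; not confirmed as the exact setup used here.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["a model-generated summary"]              # placeholder outputs
references = ["the reference summary from the dataset"]  # placeholder targets
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print({k: round(v * 100, 4) for k, v in scores.items()})  # rouge1, rouge2, rougeL, rougeLsum
```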

Framework versions

  • Transformers 4.49.0
  • Pytorch 2.7.0+cu126
  • Datasets 3.5.1
  • Tokenizers 0.21.1
Model size: 223M parameters (Safetensors, F32)

Model tree for mishikaa16/t5-base-finetuned-arxiver

  • Base model: google-t5/t5-base