opus-mt-ar-en-finetuned-ar-to-en-final2

This model is a fine-tuned version of Helsinki-NLP/opus-mt-ar-en for Arabic-to-English translation; the fine-tuning dataset is not specified. It achieves the following results on the evaluation set:

  • Loss: 1.6824
  • Bleu: 0.2641
  • Meteor: 0.5714
  • Gen Len: 38.5649

Model description

More information needed

Intended uses & limitations

More information needed
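
As a base Marian model fine-tuned for Arabic-to-English translation, the checkpoint can be loaded with the standard `transformers` Marian classes. The sketch below is illustrative, not part of the official card; the `translate` helper and its defaults are assumptions.

```python
MODEL_ID = "itskavya/opus-mt-ar-en-finetuned-ar-to-en-final2"

def translate(texts, model_id=MODEL_ID, max_length=128):
    """Translate a list of Arabic sentences to English (illustrative sketch)."""
    # Lazy import so the helper can be defined without transformers installed.
    from transformers import MarianMTModel, MarianTokenizer

    tokenizer = MarianTokenizer.from_pretrained(model_id)
    model = MarianMTModel.from_pretrained(model_id)

    # Tokenize with padding so a batch of variable-length sentences works.
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch, max_length=max_length)
    return [tokenizer.decode(ids, skip_special_tokens=True) for ids in generated]
```

Calling `translate(["مرحبا بالعالم"])` would return a one-element list with the English translation.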

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 20
  • num_epochs: 15
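
The linear scheduler with 20 warmup steps can be sketched in plain Python. This is a hand-rolled approximation of `transformers`' `get_linear_schedule_with_warmup`, not the training code itself; the total of 5400 steps is inferred from the results table (360 steps per epoch × 15 epochs).

```python
def lr_at_step(step, total_steps=5400, base_lr=1e-5, warmup_steps=20):
    """Learning rate at a given optimizer step under linear warmup + linear decay."""
    if step < warmup_steps:
        # Ramp linearly from 0 up to base_lr over the warmup steps.
        return base_lr * step / max(1, warmup_steps)
    # Then decay linearly from base_lr down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

For example, the rate is half of `base_lr` at step 10 (mid-warmup) and reaches 0 at the final step.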

Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|
| No log        | 1.0   | 360  | 1.8975          | 0.1623 | 0.5061 | 52.5552 |
| 2.0487        | 2.0   | 720  | 1.8039          | 0.1923 | 0.5345 | 47.5779 |
| 1.6565        | 3.0   | 1080 | 1.7538          | 0.2254 | 0.5493 | 41.7695 |
| 1.6565        | 4.0   | 1440 | 1.7228          | 0.2231 | 0.5539 | 45.3669 |
| 1.4299        | 5.0   | 1800 | 1.6990          | 0.2467 | 0.5579 | 39.5974 |
| 1.2508        | 6.0   | 2160 | 1.6865          | 0.2493 | 0.5611 | 39.8474 |
| 1.1161        | 7.0   | 2520 | 1.6754          | 0.2516 | 0.5640 | 39.7597 |
| 1.1161        | 8.0   | 2880 | 1.6740          | 0.2540 | 0.5620 | 39.5942 |
| 1.0023        | 9.0   | 3240 | 1.6727          | 0.2584 | 0.5707 | 39.8896 |
| 0.9296        | 10.0  | 3600 | 1.6713          | 0.2644 | 0.5709 | 38.6688 |
| 0.9296        | 11.0  | 3960 | 1.6753          | 0.2717 | 0.5721 | 38.4481 |
| 0.8645        | 12.0  | 4320 | 1.6775          | 0.2687 | 0.5742 | 38.5779 |
| 0.8232        | 13.0  | 4680 | 1.6823          | 0.2542 | 0.5708 | 40.3019 |
| 0.7956        | 14.0  | 5040 | 1.6824          | 0.2641 | 0.5714 | 38.5649 |

Framework versions

  • Transformers 4.50.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.5.0
  • Tokenizers 0.21.1