Whisper small Tonga - Tonga ASR

This model is a fine-tuned version of openai/whisper-small on the csikasote/Tonga dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5598
  • WER: 51.1127 (word error rate, in percent)
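
The checkpoint can be loaded directly with the transformers ASR pipeline. A minimal inference sketch, assuming the published checkpoint id simzacademy/whisper-small-tonga and a placeholder audio path:

```python
# Minimal sketch: transcribing a Tonga audio file with this checkpoint.
# "sample_tonga.wav" is a placeholder path, not a file shipped with the model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="simzacademy/whisper-small-tonga",
)

# Accepts a path to an audio file (decoded via ffmpeg) or a raw waveform array.
result = asr("sample_tonga.wav")
print(result["text"])
```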

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
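
As a rough guide, the settings above map onto transformers Seq2SeqTrainingArguments as sketched below. This is not the original training script: output_dir and the evaluation cadence are assumptions (the results table below suggests evaluation every 500 steps).

```python
# Sketch of the hyperparameters above as Seq2SeqTrainingArguments.
# output_dir and the eval cadence are assumptions; the original
# training script is not part of this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-tonga",  # assumed output location
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",                 # AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                           # native AMP mixed precision
    eval_strategy="steps",               # assumed: matches the 500-step
    eval_steps=500,                      # evaluation interval in the table
)
```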

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER (%) |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.5631        | 0.9579 | 500  | 0.5425          | 56.3746 |
| 0.3865        | 1.9157 | 1000 | 0.4621          | 57.9972 |
| 0.243         | 2.8736 | 1500 | 0.4462          | 46.4070 |
| 0.1604        | 3.8314 | 2000 | 0.4600          | 47.2415 |
| 0.0831        | 4.7893 | 2500 | 0.4872          | 48.2151 |
| 0.0423        | 5.7471 | 3000 | 0.5048          | 56.5369 |
| 0.0195        | 6.7050 | 3500 | 0.5273          | 51.3213 |
| 0.0084        | 7.6628 | 4000 | 0.5476          | 48.9801 |
| 0.0051        | 8.6207 | 4500 | 0.5553          | 50.3245 |
| 0.0036        | 9.5785 | 5000 | 0.5598          | 51.1127 |
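
The WER column is reported in percent. A minimal sketch of how such a score can be computed with the evaluate library (the reference and prediction strings here are illustrative, not taken from the Tonga evaluation set):

```python
# Sketch: computing word error rate (WER) in percent with the evaluate library.
# The strings below are illustrative placeholders.
import evaluate

wer_metric = evaluate.load("wer")

references = ["this is a sample reference"]
predictions = ["this is sample reference"]  # one deleted word out of five

wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}")  # -> WER: 20.0000
```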

Framework versions

  • Transformers 4.52.4
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.2