Whisper Small Ro - VM6

This model is a fine-tuned version of openai/whisper-small; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 1.2294
  • WER: 46.5315
  • CER: 19.8241
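
The card does not say which library computed the metrics (WER/CER are typically produced with `evaluate`/`jiwer` in Whisper fine-tuning scripts). As an illustration of what the two numbers mean, here is a minimal pure-Python sketch: both are Levenshtein edit distance normalized by reference length, over words for WER and over characters for CER, reported as percentages.

```python
def edit_distance(ref, hyp):
    # Levenshtein distance between two sequences, O(len(hyp)) memory.
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[j] = min(dp[j] + 1,      # deletion
                        dp[j - 1] + 1,  # insertion
                        prev + cost)    # substitution / match
            prev = cur
    return dp[n]

def wer(reference, hypothesis):
    # Word Error Rate: word-level edit distance / reference word count, in %.
    ref_words, hyp_words = reference.split(), hypothesis.split()
    return 100.0 * edit_distance(ref_words, hyp_words) / len(ref_words)

def cer(reference, hypothesis):
    # Character Error Rate: same idea at the character level, in %.
    return 100.0 * edit_distance(reference, hypothesis) / len(reference)

# One substituted word out of three: WER is 100/3 percent.
print(wer("ana are mere", "ana avea mere"))
```

Lower is better for both; a WER of 46.53 means roughly one word error per two reference words.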

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 300
  • training_steps: 5000
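
With `lr_scheduler_type: linear` and 300 warmup steps over 5000 training steps, the learning rate ramps from 0 to the peak of 1e-05 during warmup, then decays linearly back to 0. A minimal sketch of that schedule (the function name is hypothetical; it mirrors the behavior of Transformers' linear-with-warmup scheduler):

```python
def linear_lr(step, base_lr=1e-05, warmup_steps=300, total_steps=5000):
    # Linear warmup from 0 to base_lr over the first `warmup_steps` updates,
    # then linear decay back to 0 at `total_steps`.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_lr(300))   # peak LR, reached at the end of warmup
print(linear_lr(5000))  # decayed to zero at the final step
```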

Training results

| Training Loss | Epoch | Step | Validation Loss | WER     | CER     |
|---------------|-------|------|-----------------|---------|---------|
| 0.8374        | 0.92  | 250  | 1.0621          | 59.3526 | 30.1464 |
| 0.4817        | 1.85  | 500  | 0.9239          | 50.9594 | 22.8332 |
| 0.186         | 2.77  | 750  | 0.9620          | 49.2374 | 21.9476 |
| 0.0634        | 3.69  | 1000 | 1.0351          | 49.5621 | 21.8295 |
| 0.0231        | 4.61  | 1250 | 1.1032          | 50.3395 | 22.4809 |
| 0.0106        | 5.54  | 1500 | 1.1072          | 55.5545 | 25.3680 |
| 0.0077        | 6.46  | 1750 | 1.1589          | 47.1908 | 20.5936 |
| 0.0036        | 7.38  | 2000 | 1.1499          | 49.2669 | 21.8019 |
| 0.0057        | 8.3   | 2250 | 1.1656          | 47.4073 | 20.5404 |
| 0.0018        | 9.23  | 2500 | 1.1767          | 47.1810 | 20.5621 |
| 0.0014        | 10.15 | 2750 | 1.1721          | 46.8562 | 20.2413 |
| 0.0015        | 11.07 | 3000 | 1.1905          | 46.6890 | 20.0405 |
| 0.0009        | 11.99 | 3250 | 1.1961          | 46.4528 | 19.9559 |
| 0.0006        | 12.92 | 3500 | 1.2025          | 47.2597 | 20.3849 |
| 0.0006        | 13.84 | 3750 | 1.2141          | 46.6103 | 20.0287 |
| 0.0008        | 14.76 | 4000 | 1.2178          | 46.4233 | 19.9658 |
| 0.0005        | 15.68 | 4250 | 1.2231          | 46.3249 | 19.8457 |
| 0.0004        | 16.61 | 4500 | 1.2265          | 46.6004 | 19.8910 |
| 0.0003        | 17.53 | 4750 | 1.2288          | 46.5512 | 19.8201 |
| 0.0003        | 18.45 | 5000 | 1.2294          | 46.5315 | 19.8241 |

Framework versions

  • Transformers 4.32.0.dev0
  • Pytorch 2.1.0+cu118
  • Datasets 2.12.0
  • Tokenizers 0.13.3

Model tree for VMadalina/whisper-small_ron3ws-music

Fine-tuned from openai/whisper-small.