# Whisper Small Fine-tuned on the THUYG-20 Uyghur Dataset
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the THUYG-20 ("A free Uyghur speech database") dataset. It achieves the following results on the THUYG-20 test set:
- Loss: 0.7473
- WER Ortho: 18.0908
- WER: 17.9401
- CER: 4.9274
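For quick usage, the model can be loaded through the `transformers` automatic-speech-recognition pipeline. The snippet below is a minimal sketch; `audio.wav` is a placeholder for a 16 kHz mono Uyghur recording.

```python
# Minimal inference sketch using the Hugging Face ASR pipeline.
# "audio.wav" is a placeholder path; replace it with your own recording.
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="ixxan/whisper-small-uyghur-thugy20",
)

result = transcriber("audio.wav")
print(result["text"])
```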
## Training procedure
Fine-tuning code is available at https://github.com/ixxan/ug-speech.
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 4000
- mixed_precision_training: Native AMP
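The actual training script lives in the repository linked above; the sketch below only shows how the listed hyperparameters map onto `transformers.Seq2SeqTrainingArguments`. The output directory name is a placeholder.

```python
# Sketch: mapping the hyperparameters above onto Seq2SeqTrainingArguments.
# "whisper-small-uyghur" is a placeholder output directory.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-uyghur",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 16
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=4000,
    fp16=True,                       # mixed-precision training (native AMP)
    optim="adamw_torch",             # AdamW, betas=(0.9, 0.999), eps=1e-8
)
```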
### Training results
| Training Loss | Epoch  | Step | Validation Loss | WER Ortho | WER     | CER     |
|---------------|--------|------|-----------------|-----------|---------|---------|
| 0.3815        | 0.8058 | 500  | 0.7944          | 34.8819   | 34.7960 | 10.4265 |
| 0.1343        | 1.6116 | 1000 | 0.7441          | 28.3393   | 28.3550 | 8.3051  |
| 0.0646        | 2.4174 | 1500 | 0.7396          | 27.7378   | 27.5653 | 8.5366  |
| 0.0311        | 3.2232 | 2000 | 0.6984          | 25.1910   | 24.9445 | 7.5643  |
| 0.0176        | 4.0290 | 2500 | 0.6934          | 21.3709   | 21.2523 | 5.8316  |
| 0.0075        | 4.8348 | 3000 | 0.7654          | 20.5541   | 20.3603 | 5.7519  |
| 0.0023        | 5.6406 | 3500 | 0.7686          | 18.7582   | 18.5846 | 5.1923  |
| 0.0004        | 6.4464 | 4000 | 0.7473          | 18.0908   | 17.9401 | 4.9274  |
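The WER and CER columns are standard edit-distance metrics reported as percentages. The sketch below shows one way such numbers can be computed with the Hugging Face `evaluate` library; the example strings are illustrative placeholders, not THUYG-20 data.

```python
# Sketch: computing WER and CER with the Hugging Face `evaluate` library.
# The strings below are illustrative placeholders, not THUYG-20 transcripts.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["transcribed hypothesis text"]
references = ["reference transcript text"]

print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```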
### Framework versions
- Transformers 4.46.2
- PyTorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3