Whisper Tiny Urdu
This model is a fine-tuned version of openai/whisper-tiny on the Common Voice 17.0 dataset.
Review the testing script: Testing Urdu Whisper tiny
Model description
This Whisper Tiny model has been fine-tuned on the Common Voice 17.0 dataset, which includes over 55 hours of Urdu speech data. The model was trained twice with different hyperparameters to optimize its performance.
Despite being the smallest variant in its family, this model achieves good performance on Urdu ASR tasks. It is well suited to deployment on small devices, offering a strong balance between efficiency and accuracy.
Note: The test split was included during training. Therefore, any metrics previously reported on this split do not reflect real-world generalization and have been removed to avoid confusion.
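For a quick check, the checkpoint can be tried with the Hugging Face Transformers ASR pipeline. This is a minimal sketch, not the author's testing script: the audio path is a placeholder, and forcing the language and task keeps Whisper from auto-detecting or translating.

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned Urdu checkpoint for transcription.
asr = pipeline(
    "automatic-speech-recognition",
    model="sharjeel103/whisper-tiny-urdu",
)

# "sample_ur.wav" is a placeholder path to a 16 kHz Urdu audio file.
result = asr(
    "sample_ur.wav",
    generate_kwargs={"language": "urdu", "task": "transcribe"},
)
print(result["text"])
```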
Intended uses & limitations
This model is particularly suited for applications on edge devices with limited computational resources. Additionally, it can be converted to a FasterWhisper model using the CTranslate2 library, allowing for even faster inference on devices with lower processing power.
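As an illustration of the CTranslate2 route mentioned above, the checkpoint can be converted with the `ct2-transformers-converter` tool and then loaded with faster-whisper. The output directory, quantization choice, and audio path below are assumptions for the sketch, not settings from this model card.

```python
# Conversion step (run once in a shell), assuming CTranslate2 is installed:
#   ct2-transformers-converter --model sharjeel103/whisper-tiny-urdu \
#       --output_dir whisper-tiny-urdu-ct2 \
#       --copy_files tokenizer.json preprocessor_config.json \
#       --quantization int8

from faster_whisper import WhisperModel

# int8 on CPU is a reasonable assumption for low-power edge devices.
model = WhisperModel("whisper-tiny-urdu-ct2", device="cpu", compute_type="int8")

segments, info = model.transcribe("sample_ur.wav", language="ur")
for segment in segments:
    print(segment.text)
```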
Evaluation
Urdu ASR evaluation on the urdu-asr/csalt-voice dataset (validation split):

| Metric | Value | Description |
|---|---|---|
| WER | 64.961% | Word Error Rate (lower is better) |
| CER | 42.488% | Character Error Rate (lower is better) |
| BLEU | 16.710% | BLEU score (higher is better) |
| ChrF | 43.545 | Character n-gram F-score (higher is better) |
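For reference, WER and CER of this kind are commonly computed with the Hugging Face `evaluate` library (which uses jiwer under the hood). The snippet below is a generic illustration with made-up reference/hypothesis strings; it is not the evaluation script behind the numbers above.

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Hypothetical example strings; the reported scores were computed on
# the urdu-asr/csalt-voice validation split.
references = ["یہ ایک مثال ہے"]
predictions = ["یہ ایک مثال"]

wer = wer_metric.compute(predictions=predictions, references=references)
cer = cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.3%}  CER: {cer:.3%}")
```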
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 3000
- mixed_precision_training: Native AMP
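As a rough sketch, these hyperparameters map onto `Seq2SeqTrainingArguments` in Transformers roughly as follows; the output directory and `predict_with_generate` flag are assumptions, not settings taken from the original run.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed above; other arguments are assumed.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-urdu",   # assumed
    learning_rate=4e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=3000,
    fp16=True,                          # Native AMP mixed precision
    predict_with_generate=True,         # assumed, typical for Whisper fine-tuning
)
```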
Framework versions
- Transformers 4.42.3
- PyTorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1