# Whisper-WOLOF-5-hours-ALFFA-dataset
This model is a fine-tuned version of openai/whisper-small on the Bambara-asr dataset. It achieves the following results on the evaluation set:
- Loss: 0.5488
- WER: 25.3648
- CER: 7.4292
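For quick testing, the checkpoint can be loaded with the `transformers` ASR pipeline. This is a minimal inference sketch, assuming the Hub repo id below is correct and that `audio.wav` stands in for a local speech recording:

```python
# Minimal inference sketch. Assumptions: the Hub repo id below is the
# published checkpoint, and "audio.wav" is a placeholder for a local
# mono speech file (the pipeline resamples to 16 kHz as Whisper expects).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="asr-africa/Whisper-WOLOF-5-hours-ALFFA-dataset",
)

result = asr("audio.wav")
print(result["text"])
```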
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
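For reference, the list above maps roughly onto a `Seq2SeqTrainingArguments` configuration as sketched below. This is a reconstruction under stated assumptions, not the authors' training script: `output_dir` and the evaluation/save cadence are placeholders, and only the values listed in the card are taken from the actual run.

```python
# Sketch of Seq2SeqTrainingArguments matching the hyperparameters above.
# output_dir and the eval cadence are illustrative placeholders.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-wolof-5h",   # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",               # the Adam betas/epsilon above are this optimizer's defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    eval_strategy="steps",             # evaluation every 500 steps, per the results table below
    eval_steps=500,
    predict_with_generate=True,
)
```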
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER | CER |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.312 | 3.8760 | 500 | 0.4949 | 33.8409 | 10.1763 |
| 0.1965 | 7.7519 | 1000 | 0.4457 | 29.0445 | 8.7321 |
| 0.0184 | 11.6279 | 1500 | 0.4554 | 26.4433 | 7.7258 |
| 0.0049 | 15.5039 | 2000 | 0.4903 | 27.0143 | 8.0668 |
| 0.0024 | 19.3798 | 2500 | 0.4904 | 25.9865 | 7.8505 |
| 0.0012 | 23.2558 | 3000 | 0.5039 | 25.4917 | 7.5317 |
| 0.0005 | 27.1318 | 3500 | 0.5155 | 25.3140 | 7.5012 |
| 0.0003 | 31.0078 | 4000 | 0.5259 | 25.4790 | 7.5234 |
| 0.0002 | 34.8837 | 4500 | 0.5339 | 25.3394 | 7.4680 |
| 0.0002 | 38.7597 | 5000 | 0.5395 | 25.3267 | 7.4541 |
| 0.0002 | 42.6357 | 5500 | 0.5448 | 25.3267 | 7.4375 |
| 0.0001 | 46.5116 | 6000 | 0.5488 | 25.3648 | 7.4292 |
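The WER/CER columns are the usual word and character error rates. A minimal sketch of computing them with the `evaluate` library follows; the example strings are illustrative, and both metrics require the `jiwer` package:

```python
# Sketch of computing WER/CER as reported above, using `evaluate` (which
# pulls in `jiwer` for both metrics). The strings are illustrative, not
# taken from the dataset.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["the quick brown fox"]    # ground-truth transcripts
predictions = ["the quick brown box"]   # model outputs

# Both metrics return a fraction; the card appears to report them as percentages.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
cer = 100 * cer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.2f}%  CER: {cer:.2f}%")
```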
### Framework versions
- Transformers 4.45.2
- PyTorch 2.1.0+cu118
- Datasets 3.0.1
- Tokenizers 0.20.1