Whisper
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the mozilla-foundation/common_voice_13_0 eu (Basque) dataset. It achieves the results reported in the training table below on the evaluation set.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
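A minimal usage sketch with the Hugging Face `transformers` ASR pipeline, assuming a recent `transformers` version with Whisper support. The default `model_id` below is a placeholder, not the real repository name; substitute the actual checkpoint id.

```python
from transformers import pipeline


def build_basque_transcriber(model_id: str = "your-org/whisper-large-eu"):
    """Build an ASR pipeline for this fine-tuned checkpoint.

    The default model_id is a placeholder; replace it with the actual
    Hugging Face repository name of this model.
    """
    return pipeline(
        "automatic-speech-recognition",
        model=model_id,
        chunk_length_s=30,  # Whisper processes audio in 30-second windows
    )


# Example (requires downloading the model and an audio file):
# asr = build_basque_transcriber()
# out = asr("clip.wav", generate_kwargs={"language": "basque", "task": "transcribe"})
# print(out["text"])
```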
### Training hyperparameters

The following hyperparameters were used during training:
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0196        | 4.01  | 1000  | 0.2825          | 15.4725 |
| 0.0039        | 9.01  | 2000  | 0.3072          | 14.2270 |
| 0.0031        | 14.01 | 3000  | 0.3170          | 13.7652 |
| 0.0023        | 19.0  | 4000  | 0.3310          | 13.6640 |
| 0.0014        | 24.0  | 5000  | 0.3384          | 13.5749 |
| 0.0034        | 29.0  | 6000  | 0.3425          | 13.7450 |
| 0.0011        | 33.01 | 7000  | 0.3476          | 13.0990 |
| 0.001         | 38.01 | 8000  | 0.3432          | 13.0990 |
| 0.0004        | 43.01 | 9000  | 0.3524          | 12.8033 |
| 0.0017        | 48.01 | 10000 | 0.3620          | 13.3946 |
| 0.0003        | 53.0  | 11000 | 0.3564          | 12.6190 |
| 0.0001        | 58.0  | 12000 | 0.3675          | 12.6352 |
| 0.0           | 63.0  | 13000 | 0.3878          | 12.4286 |
| 0.0           | 67.01 | 14000 | 0.3996          | 12.3577 |
| 0.0           | 72.01 | 15000 | 0.4088          | 12.3456 |
| 0.0           | 77.01 | 16000 | 0.4167          | 12.3091 |
| 0.0           | 82.01 | 17000 | 0.4241          | 12.3112 |
| 0.0           | 87.0  | 18000 | 0.4302          | 12.3193 |
| 0.0           | 92.0  | 19000 | 0.4351          | 12.2565 |
| 0.0           | 97.0  | 20000 | 0.4369          | 12.2342 |
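The Wer column above is the word error rate in percent. As a reference for how such a score is computed, here is a minimal sketch of WER as a word-level Levenshtein distance normalized by reference length (in practice libraries such as `jiwer` or `evaluate` are typically used):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)


# One substituted word out of three reference words -> WER of 1/3 (33.3%).
print(wer("kaixo mundu hau", "kaixo mundua hau"))
```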
If you use these models in your research, please cite:
```bibtex
@misc{dezuazo2025whisperlmimprovingasrmodels,
      title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages},
      author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja},
      year={2025},
      eprint={2503.23542},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.23542},
}
```
Please see the related paper preprint, arXiv:2503.23542, for more details.
This model is available under the Apache-2.0 License. You are free to use, modify, and distribute this model as long as you credit the original creators.