Automatic Speech Recognition
Transformers
TensorBoard
Safetensors
whisper
Generated from Trainer
cobrayyxx committed
Commit 334b528 · verified · 1 parent: 2b91ce2

Update README.md

Files changed (1): README.md (+97, -5)
README.md CHANGED
@@ -9,6 +9,9 @@ metrics:
 model-index:
 - name: whisper-medium-bem2en
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,22 +19,22 @@ should probably proofread and complete it, then remove this comment. -->
 
 # whisper-medium-bem2en
 
- This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.6966
 - Wer: 38.3922
 
 ## Model description
 
- More information needed
 
- ## Intended uses & limitations
 
- More information needed
 
 ## Training and evaluation data
 
- More information needed
 
 ## Training procedure
 
@@ -60,6 +63,13 @@ The following hyperparameters were used during training:
 | 0.3563 | 4.0 | 24820 | 0.5455 | 38.3652 |
 | 0.1066 | 5.0 | 31025 | 0.6966 | 38.3922 |
 
 
 ### Framework versions
 
@@ -67,3 +77,85 @@ The following hyperparameters were used during training:
 - Pytorch 2.5.1+cu121
 - Datasets 3.4.0
 - Tokenizers 0.21.0
@@ -9,6 +9,9 @@ metrics:
 model-index:
 - name: whisper-medium-bem2en
   results: []
+ datasets:
+ - kreasof-ai/bemba-speech-csikasote
+ - kreasof-ai/bigc-bem-eng
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,22 +19,22 @@ should probably proofread and complete it, then remove this comment. -->
 
 # whisper-medium-bem2en
 
+ This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the [Big-C](https://huggingface.co/datasets/kreasof-ai/bem-eng-bigc) and [BembaSpeech](https://huggingface.co/datasets/kreasof-ai/bemba-speech-csikasote) datasets.
 It achieves the following results on the evaluation set:
 - Loss: 0.6966
 - Wer: 38.3922
 
 ## Model description
 
+ This model is a transcription model for Bemba audio.
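
A minimal usage sketch with the Transformers ASR pipeline; the Hub id `kreasof-ai/whisper-medium-bem2en` and the audio path are assumptions for illustration:

```python
# Hedged sketch: transcribe a Bemba recording with the Transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kreasof-ai/whisper-medium-bem2en",  # assumed repo id for this checkpoint
    chunk_length_s=30,                         # split long-form audio into 30 s windows
)

# "bemba_sample.wav" is a placeholder path; the pipeline decodes and resamples it to 16 kHz.
result = asr("bemba_sample.wav")
print(result["text"])
```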
 
+ ## Intended uses
 
+ This model was used for the Bemba-to-English translation task as part of the IWSLT 2025 Low-Resource Track.
 
 ## Training and evaluation data
 
+ This model was trained on the `train+dev` splits of the BembaSpeech dataset and the `train+val` splits of the Big-C dataset. For evaluation, it used the `test` splits of both Big-C and BembaSpeech.
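
A minimal sketch of assembling these splits with the `datasets` library; the repo ids follow the metadata above, and the exact split names (`train`, `dev`, `val`, `test`) are assumptions:

```python
# Hedged sketch: build the train and test sets described above with Hugging Face Datasets.
from datasets import load_dataset, concatenate_datasets

bembaspeech = load_dataset("kreasof-ai/bemba-speech-csikasote")  # assumed default config
big_c = load_dataset("kreasof-ai/bigc-bem-eng")                  # assumed default config

# train+dev from BembaSpeech and train+val from Big-C (features must match to concatenate)
train_data = concatenate_datasets(
    [bembaspeech["train"], bembaspeech["dev"], big_c["train"], big_c["val"]]
)

# held-out test splits from both corpora
test_data = {"bembaspeech": bembaspeech["test"], "bigc": big_c["test"]}
print(train_data)
```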
 
 ## Training procedure
 
@@ -60,6 +63,13 @@ The following hyperparameters were used during training:
 | 0.3563 | 4.0 | 24820 | 0.5455 | 38.3652 |
 | 0.1066 | 5.0 | 31025 | 0.6966 | 38.3922 |
 
+ ### Model Evaluation
+ The performance of this model was evaluated with WER on the `test` split of the Big-C dataset.
+
+ | Model     | WER    |
+ | --------- | ------ |
+ | Baseline  | 150.92 |
+ | Finetuned | 36.19  |
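
The WER numbers above can be reproduced in spirit with the `evaluate` library; this sketch uses toy strings in place of the Big-C `test` transcripts and model outputs:

```python
# Hedged sketch: word error rate between reference transcripts and model predictions.
import evaluate

wer_metric = evaluate.load("wer")

references = ["muli shani mukwai", "natotela sana"]    # placeholder ground-truth transcripts
predictions = ["muli shani mukwai", "natotela saana"]  # placeholder model outputs

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * wer:.2f}")  # reported as a percentage, as in the table above
```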
 
 ### Framework versions
 
@@ -67,3 +77,85 @@ The following hyperparameters were used during training:
 - Pytorch 2.5.1+cu121
 - Datasets 3.4.0
 - Tokenizers 0.21.0
+
+ ## Citation
+
+ ```
+ @inproceedings{nllb2022,
+ title = {No Language Left Behind: Scaling Human-Centered Machine Translation},
+ author = {Costa-jussà, Marta R. and Cross, James and others},
+ booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
+ year = {2022},
+ publisher = {Association for Computational Linguistics},
+ url = {https://aclanthology.org/2022.emnlp-main.9}
+ }
+
+ @inproceedings{sikasote-etal-2023-big,
+ title = "{BIG}-{C}: a Multimodal Multi-Purpose Dataset for {B}emba",
+ author = "Sikasote, Claytone and
+ Mukonde, Eunice and
+ Alam, Md Mahfuz Ibn and
+ Anastasopoulos, Antonios",
+ editor = "Rogers, Anna and
+ Boyd-Graber, Jordan and
+ Okazaki, Naoaki",
+ booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+ month = jul,
+ year = "2023",
+ address = "Toronto, Canada",
+ publisher = "Association for Computational Linguistics",
+ url = "https://aclanthology.org/2023.acl-long.115",
+ doi = "10.18653/v1/2023.acl-long.115",
+ pages = "2062--2078",
+ abstract = "We present BIG-C (Bemba Image Grounded Conversations), a large multimodal dataset for Bemba. While Bemba is the most populous language of Zambia, it exhibits a dearth of resources which render the development of language technologies or language processing research almost impossible. The dataset is comprised of multi-turn dialogues between Bemba speakers based on images, transcribed and translated into English. There are more than 92,000 utterances/sentences, amounting to more than 180 hours of audio data with corresponding transcriptions and English translations. We also provide baselines on speech recognition (ASR), machine translation (MT) and speech translation (ST) tasks, and sketch out other potential future multimodal uses of our dataset. We hope that by making the dataset available to the research community, this work will foster research and encourage collaboration across the language, speech, and vision communities especially for languages outside the {``}traditionally{''} used high-resourced ones. All data and code are publicly available: [\url{https://github.com/csikasote/bigc}](\url{https://github.com/csikasote/bigc}).",
+ }
+
+ @inproceedings{wang-etal-2024-afrimte,
+ title = "{A}fri{MTE} and {A}fri{COMET}: Enhancing {COMET} to Embrace Under-resourced {A}frican Languages",
+ author = "Wang, Jiayi and Adelani, David and Agrawal, Sweta and Masiak, Marek and Rei, Ricardo and Briakou, Eleftheria and Carpuat, Marine and He, Xuanli and others",
+ booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
+ month = "jun",
+ year = "2024",
+ address = "Mexico City, Mexico",
+ publisher = "Association for Computational Linguistics",
+ url = "https://aclanthology.org/2024.naacl-long.334/",
+ doi = "10.18653/v1/2024.naacl-long.334",
+ pages = "5997--6023"
+ }
+
+ @InProceedings{sikasote-anastasopoulos:2022:LREC,
+ author = {Sikasote, Claytone and Anastasopoulos, Antonios},
+ title = {BembaSpeech: A Speech Recognition Corpus for the Bemba Language},
+ booktitle = {Proceedings of the Language Resources and Evaluation Conference},
+ month = {June},
+ year = {2022},
+ address = {Marseille, France},
+ publisher = {European Language Resources Association},
+ pages = {7277--7283},
+ abstract = {We present a preprocessed, ready-to-use automatic speech recognition corpus, BembaSpeech, consisting over 24 hours of read speech in the Bemba language, a written but low-resourced language spoken by over 30\% of the population in Zambia. To assess its usefulness for training and testing ASR systems for Bemba, we explored different approaches; supervised pre-training (training from scratch), cross-lingual transfer learning from a monolingual English pre-trained model using DeepSpeech on the portion of the dataset and fine-tuning large scale self-supervised Wav2Vec2.0 based multilingual pre-trained models on the complete BembaSpeech corpus. From our experiments, the 1 billion XLS-R parameter model gives the best results. The model achieves a word error rate (WER) of 32.91\%, results demonstrating that model capacity significantly improves performance and that multilingual pre-trained models transfers cross-lingual acoustic representation better than monolingual pre-trained English model on the BembaSpeech for the Bemba ASR. Lastly, results also show that the corpus can be used for building ASR systems for Bemba language.},
+ url = {https://aclanthology.org/2022.lrec-1.790}
+ }
+
+ @inproceedings{wang2024evaluating,
+ title={Evaluating WMT 2024 Metrics Shared Task Submissions on AfriMTE (the African Challenge Set)},
+ author={Wang, Jiayi and Adelani, David Ifeoluwa and Stenetorp, Pontus},
+ booktitle={Proceedings of the Ninth Conference on Machine Translation},
+ pages={505--516},
+ year={2024}
+ }
+
+ @inproceedings{freitag2024llms,
+ title={Are LLMs breaking MT metrics? results of the WMT24 metrics shared task},
+ author={Freitag, Markus and Mathur, Nitika and Deutsch, Daniel and Lo, Chi-Kiu and Avramidis, Eleftherios and Rei, Ricardo and Thompson, Brian and Blain, Frederic and Kocmi, Tom and Wang, Jiayi and others},
+ booktitle={Proceedings of the Ninth Conference on Machine Translation},
+ pages={47--81},
+ year={2024}
+ }
+ ```
+ ## Contact
+
+ This model was trained by [Hazim](https://huggingface.co/cobrayyxx).
+
+ ## Acknowledgments
+
+ Huge thanks to [Yasmin Moslem](https://huggingface.co/ymoslem) for her supervision, and to [Habibullah Akbar](https://huggingface.co/ChavyvAkvar), founder of Kreasof-AI, for his leadership and support.