cobrayyxx committed
Commit 6437301 · verified · 1 Parent(s): ce6d738

Update README.md

Files changed (1):
  1. README.md +45 -0
README.md CHANGED
@@ -26,4 +26,49 @@ configs:
      path: data/dev-*
  - split: test
      path: data/test-*
+ task_categories:
+ - automatic-speech-recognition
+ language:
+ - bem
  ---
+
+ # Description
+
+ This is a speech dataset for the Bemba language, acquired from [BembaSpeech](https://github.com/csikasote/BembaSpeech/tree/master).
+ BembaSpeech is a speech recognition corpus for Bemba [1].
+
+ # Dataset Structure
+
+ ```
+ DatasetDict({
+     train: Dataset({
+         features: ['audio', 'sentence'],
+         num_rows: 12421
+     })
+     dev: Dataset({
+         features: ['audio', 'sentence'],
+         num_rows: 1700
+     })
+     test: Dataset({
+         features: ['audio', 'sentence'],
+         num_rows: 1359
+     })
+ })
+ ```
+
+ # Citation
+ ```
+ @InProceedings{sikasote-anastasopoulos:2022:LREC,
+   author = {Sikasote, Claytone and Anastasopoulos, Antonios},
+   title = {BembaSpeech: A Speech Recognition Corpus for the Bemba Language},
+   booktitle = {Proceedings of the Language Resources and Evaluation Conference},
+   month = {June},
+   year = {2022},
+   address = {Marseille, France},
+   publisher = {European Language Resources Association},
+   pages = {7277--7283},
+   abstract = {We present a preprocessed, ready-to-use automatic speech recognition corpus, BembaSpeech, consisting over 24 hours of read speech in the Bemba language, a written but low-resourced language spoken by over 30\% of the population in Zambia. To assess its usefulness for training and testing ASR systems for Bemba, we explored different approaches; supervised pre-training (training from scratch), cross-lingual transfer learning from a monolingual English pre-trained model using DeepSpeech on the portion of the dataset and fine-tuning large scale self-supervised Wav2Vec2.0 based multilingual pre-trained models on the complete BembaSpeech corpus. From our experiments, the 1 billion XLS-R parameter model gives the best results. The model achieves a word error rate (WER) of 32.91\%, results demonstrating that model capacity significantly improves performance and that multilingual pre-trained models transfers cross-lingual acoustic representation better than monolingual pre-trained English model on the BembaSpeech for the Bemba ASR. Lastly, results also show that the corpus can be used for building ASR systems for Bemba language.},
+   url = {https://aclanthology.org/2022.lrec-1.790}
+ }
+ ```
+
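
For reference, a minimal sketch of loading this dataset with the Hugging Face `datasets` library to reproduce the `DatasetDict` shown above. The repository id `cobrayyxx/BembaSpeech` is an assumption for illustration; the actual repo id is not shown in this commit.

```
from datasets import load_dataset

# NOTE: the repo id below is an assumption; replace it with the actual
# dataset repository id on the Hugging Face Hub.
ds = load_dataset("cobrayyxx/BembaSpeech")

# Expect the three splits described in the README: train, dev, test.
print(ds)

# Each row pairs an audio recording with its transcription.
sample = ds["train"][0]
print(sample["sentence"])                # transcription text
print(sample["audio"]["sampling_rate"])  # decoded audio metadata
```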