---
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: sentence
    dtype: string
  splits:
  - name: train
    num_bytes: 2458890204.774
    num_examples: 12421
  - name: dev
    num_bytes: 321013046.5
    num_examples: 1700
  - name: test
    num_bytes: 334783172.271
    num_examples: 1359
  download_size: 3025102759
  dataset_size: 3114686423.545
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
task_categories:
- automatic-speech-recognition
language:
- bem
---
Description
This is a speech dataset for the Bemba language. It was acquired from [BembaSpeech](https://github.com/csikasote/BembaSpeech/tree/master), a speech recognition corpus for Bemba [1].
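The dataset can be loaded with the Hugging Face datasets library. Below is a minimal sketch, assuming the corpus is published on the Hub; the repository ID "your-username/bembaspeech" is a placeholder, so substitute this dataset's actual identifier.

from datasets import load_dataset

# Placeholder repository ID; replace with this dataset's actual Hub identifier.
dataset = load_dataset("your-username/bembaspeech")

# The default config declared in the metadata above exposes three splits.
print(dataset)                           # DatasetDict with train/dev/test
print(dataset["train"][0]["sentence"])   # Bemba transcription of the first clip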
Dataset Structure
DatasetDict({
    train: Dataset({
        features: ['audio', 'sentence'],
        num_rows: 12421
    })
    dev: Dataset({
        features: ['audio', 'sentence'],
        num_rows: 1700
    })
    test: Dataset({
        features: ['audio', 'sentence'],
        num_rows: 1359
    })
})
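Individual rows can be inspected directly as a quick sanity check. The sketch below assumes dataset was loaded as in the example above; with the standard datasets Audio feature, accessing the audio column decodes it into a dict holding the waveform, its sampling rate, and the source file path.

# Assumes `dataset` was loaded with load_dataset as sketched above.
example = dataset["train"][0]

audio = example["audio"]            # decoded Audio feature
print(audio["sampling_rate"])       # sampling rate of the clip
print(audio["array"].shape)         # 1-D numpy array of audio samples
print(example["sentence"])          # matching Bemba transcription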
Citation
1. @InProceedings{sikasote-anastasopoulos:2022:LREC,
author = {Sikasote, Claytone and Anastasopoulos, Antonios},
title = {BembaSpeech: A Speech Recognition Corpus for the Bemba Language},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {7277--7283},
abstract = {We present a preprocessed, ready-to-use automatic speech recognition corpus, BembaSpeech, consisting over 24 hours of read speech in the Bemba language, a written but low-resourced language spoken by over 30\% of the population in Zambia. To assess its usefulness for training and testing ASR systems for Bemba, we explored different approaches; supervised pre-training (training from scratch), cross-lingual transfer learning from a monolingual English pre-trained model using DeepSpeech on the portion of the dataset and fine-tuning large scale self-supervised Wav2Vec2.0 based multilingual pre-trained models on the complete BembaSpeech corpus. From our experiments, the 1 billion XLS-R parameter model gives the best results. The model achieves a word error rate (WER) of 32.91\%, results demonstrating that model capacity significantly improves performance and that multilingual pre-trained models transfers cross-lingual acoustic representation better than monolingual pre-trained English model on the BembaSpeech for the Bemba ASR. Lastly, results also show that the corpus can be used for building ASR systems for Bemba language.},
url = {https://aclanthology.org/2022.lrec-1.790}
}