---
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: sentence
      dtype: string
    - name: translation
      dtype: string
    - name: speaker_id
      dtype: int64
  splits:
    - name: train
      num_bytes: 20260378269.548
      num_examples: 82371
    - name: val
      num_bytes: 668812931.274
      num_examples: 2782
    - name: test
      num_bytes: 677406717.795
      num_examples: 2763
  download_size: 19796476086
  dataset_size: 21606597918.616997
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: val
        path: data/val-*
      - split: test
        path: data/test-*
task_categories:
  - automatic-speech-recognition
  - translation
language:
  - bem
  - en
---

## Dataset Details

This is a dataset for the Bemba-to-English speech translation task. It is derived from the [Big-C](https://github.com/csikasote/bigc) GitHub repository. Big-C is a large dataset of image-grounded conversations between Bemba speakers [1]; this release repackages its speech translation portion.

## Preprocessing Steps

The following preprocessing was applied to this dataset:

  1. Dropped all columns other than `audio_id`, `sentence`, `translation`, and `speaker_id`.
  2. Investigated duplicate values (duplicates were kept because the copies differ in audio quality).
  3. Removed test examples that overlap with the train or val split.
  4. Cast the `audio` column into an `Audio` object.
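Steps 1 and 3 above can be sketched with plain Python on toy rows (the column names come from the list above; the helper functions and sample values are hypothetical illustrations, not the actual preprocessing script):

```python
# Toy rows standing in for dataset examples; the values are hypothetical.
train = [{"audio_id": "a1", "sentence": "mwashibukeni", "translation": "good morning",
          "speaker_id": 1, "unused": "x"}]
val = [{"audio_id": "a2", "sentence": "natotela", "translation": "thank you",
        "speaker_id": 2, "unused": "x"}]
test = [{"audio_id": "a3", "sentence": "mwashibukeni", "translation": "good morning",
         "speaker_id": 3, "unused": "x"},
        {"audio_id": "a4", "sentence": "shalenipo", "translation": "goodbye",
         "speaker_id": 3, "unused": "x"}]

KEEP = ["audio_id", "sentence", "translation", "speaker_id"]

def drop_unused(rows):
    # Step 1: keep only the four columns used by this dataset.
    return [{k: r[k] for k in KEEP} for r in rows]

def remove_overlap(test_rows, *other_splits):
    # Step 3: drop test rows whose (sentence, translation) pair also
    # appears in train or val, so evaluation data stays unseen.
    seen = {(r["sentence"], r["translation"])
            for split in other_splits for r in split}
    return [r for r in test_rows
            if (r["sentence"], r["translation"]) not in seen]

train, val, test = map(drop_unused, (train, val, test))
test = remove_overlap(test, train, val)
print([r["audio_id"] for r in test])  # → ['a4']: the overlapping row "a3" is removed
```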

## Dataset Structure

```
DatasetDict({
    train: Dataset({
        features: ['audio', 'sentence', 'translation', 'speaker_id'],
        num_rows: 82371
    })
    val: Dataset({
        features: ['audio', 'sentence', 'translation', 'speaker_id'],
        num_rows: 2782
    })
    test: Dataset({
        features: ['audio', 'sentence', 'translation', 'speaker_id'],
        num_rows: 2763
    })
})
```
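As a quick sanity check, the split sizes above can be tallied, and the per-example schema mirrored with a plain dict (in practice examples come from the Hugging Face `datasets` library, e.g. `load_dataset("cobrayyxx/bigc-bem-eng")` — repo id assumed from this page; the mock field values below are hypothetical placeholders):

```python
# Split sizes copied from the DatasetDict above.
splits = {"train": 82371, "val": 2782, "test": 2763}
total = sum(splits.values())
print(total)  # → 87916 examples across all splits

# Plain-dict mock of one example, mirroring the four features;
# the values here are hypothetical placeholders.
example = {
    "audio": {"path": "train/sample.wav", "array": None, "sampling_rate": None},
    "sentence": "Bemba transcription",     # source-language text
    "translation": "English translation",  # target-language text
    "speaker_id": 0,
}
# Speech translation pairs the audio with the English translation;
# pairing audio with `sentence` instead gives an ASR example.
st_pair = (example["audio"]["path"], example["translation"])
```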

## Citation

1. Sikasote et al., 2023 (the Big-C paper):

   ```bibtex
   @inproceedings{sikasote-etal-2023-big,
     title = "{BIG}-{C}: a Multimodal Multi-Purpose Dataset for {B}emba",
     author = "Sikasote, Claytone and
       Mukonde, Eunice and
       Alam, Md Mahfuz Ibn and
       Anastasopoulos, Antonios",
     editor = "Rogers, Anna and
       Boyd-Graber, Jordan and
       Okazaki, Naoaki",
     booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
     month = jul,
     year = "2023",
     address = "Toronto, Canada",
     publisher = "Association for Computational Linguistics",
     url = "https://aclanthology.org/2023.acl-long.115",
     doi = "10.18653/v1/2023.acl-long.115",
     pages = "2062--2078",
     abstract = "We present BIG-C (Bemba Image Grounded Conversations), a large multimodal dataset for Bemba. While Bemba is the most populous language of Zambia, it exhibits a dearth of resources which render the development of language technologies or language processing research almost impossible. The dataset is comprised of multi-turn dialogues between Bemba speakers based on images, transcribed and translated into English. There are more than 92,000 utterances/sentences, amounting to more than 180 hours of audio data with corresponding transcriptions and English translations. We also provide baselines on speech recognition (ASR), machine translation (MT) and speech translation (ST) tasks, and sketch out other potential future multimodal uses of our dataset. We hope that by making the dataset available to the research community, this work will foster research and encourage collaboration across the language, speech, and vision communities especially for languages outside the {``}traditionally{''} used high-resourced ones. All data and code are publicly available: [\url{https://github.com/csikasote/bigc}](\url{https://github.com/csikasote/bigc}).",
   }
   ```