---
language:
  - bm
  - fr
pretty_name: Jeli-ASR Audio Dataset
version: 1.0.1
tags:
  - audio
  - transcription
  - multilingual
  - Bambara
  - French
license: cc-by-4.0
task_categories:
  - automatic-speech-recognition
  - text-to-speech
  - translation
task_ids:
  - audio-language-identification
  - keyword-spotting
annotations_creators:
  - semi-expert
language_creators:
  - crowdsourced
source_datasets:
  - jeli-asr
size_categories:
  - 10GB<
  - 10K<n<100K
dataset_info:
  audio_format: arrow
  features:
    - name: audio
      dtype: audio
    - name: duration
      dtype: float
    - name: bam
      dtype: string
    - name: french
      dtype: string
  total_audio_files: 33643
  total_duration_hours: ~32
configs:
  - config_name: jeli-asr-rmai
    data_files:
      - split: train
        path: jeli-asr-rmai/train/data-*.arrow
      - split: test
        path: jeli-asr-rmai/test/data-*.arrow
  - config_name: bam-asr-oza
    data_files:
      - split: train
        path: bam-asr-oza/train/data-*.arrow
      - split: test
        path: bam-asr-oza/test/data-*.arrow
  - config_name: jeli-asr
    default: true
    data_files:
      - split: train
        path:
          - jeli-asr-rmai/train/data-*.arrow
          - bam-asr-oza/train/data-*.arrow
      - split: test
        path:
          - jeli-asr-rmai/test/data-*.arrow
          - bam-asr-oza/test/data-*.arrow
description: >
  The **Jeli-ASR Audio Dataset** is a multilingual dataset converted into the
  optimized Arrow format, ensuring fast access and compatibility with modern
  data workflows. It contains audio samples in Bambara with semi-expert
  transcriptions and French translations. Each subset of the dataset is
  organized by configuration (`jeli-asr-rmai`, `bam-asr-oza`, and `jeli-asr`)
  and further split into training and testing sets. The dataset is designed
  for tasks like automatic speech recognition (ASR), text-to-speech synthesis
  (TTS), and translation. Data was recorded in Mali with griots, then
  transcribed and translated into French.
---

# Jeli-ASR Dataset

This repository contains the Jeli-ASR dataset, which is primarily a reviewed version of Aboubacar Ouattara's Bambara-ASR dataset (drawn from jeli-asr and available at oza75/bambara-asr), combined with the best data retained from the former version, jeli-data-manifest. This dataset features improved data quality for automatic speech recognition (ASR) and translation tasks, with variable-length Bambara audio samples, Bambara transcriptions, and French translations.

## Important Notes

  1. Please note that this dataset is currently in development and is therefore not fixed. The structure, content, and availability of the dataset may change as improvements and updates are made.

## Key Changes in Version 1.0.1

Jeli-ASR 1.0.1 introduces several updates and enhancements, focused entirely on the transcription side of the dataset. There have been no changes to the audio files since version 1.0.0. Below are the key updates:

  1. Symbol Removal:
    All non-vocabulary symbols deemed unnecessary for Automatic Speech Recognition (ASR) were removed, including:
    [ ] ( ) « » ° " < >

  2. Punctuation Removal:
    Common punctuation marks were removed to streamline the dataset for ASR use cases. These include:
    : , ; . ? !
    The exception is the hyphen (-), which remains because it is used in both Bambara and French compound words. While this punctuation removal enhances ASR performance, the previous version with full punctuation may still be better suited for other applications; you can still reconstruct it from the archives. A sketch of this cleaning step is shown after this list.

  3. Bambara Normalization:
    The transcriptions were normalized using the Bambara Normalizer, a Python package designed to normalize Bambara text for different NLP applications.

  4. Optimized Data Format:
    This version introduces .arrow files for efficient data storage and retrieval, and for compatibility with Hugging Face tools.
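
For illustration, here is a minimal sketch of the kind of symbol and punctuation stripping described in points 1 and 2 above. It is not the exact pipeline used to produce this release, and the Bambara Normalizer step from point 3 is not reproduced here:

```python
import re

# Characters listed above: non-vocabulary symbols (point 1) and punctuation (point 2).
# The hyphen is deliberately kept, as in the released transcriptions.
SYMBOLS_AND_PUNCTUATION = r'[\[\]()«»°"<>:,;.?!]'

def clean_transcription(text: str) -> str:
    """Remove the listed symbols/punctuation and collapse extra whitespace."""
    text = re.sub(SYMBOLS_AND_PUNCTUATION, "", text)
    return re.sub(r"\s+", " ", text).strip()

print(clean_transcription('A ko: «ne bɛ taa Bamakɔ» !'))  # -> 'A ko ne bɛ taa Bamakɔ'
```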

Let us know if you have feedback or additional use suggestions for the dataset by opening a discussion or a pull request. You can find a record of updates to the dataset in VERSIONING.md.


## Dataset Details

  • Total Duration: 32.48 hours
  • Number of Samples: 33,643
    • Training Set: 32,180 samples (~95%)
    • Testing Set: 1,463 samples (~5%)

### Subsets

  • Oza's Bambara-ASR: ~29 hours (clean subset).
  • Jeli-ASR-RMAI: ~3.5 hours (filtered subset).

Note that since the two subsets were drawn from the original Jeli-ASR dataset, they are just different variations of the same data.


## Usage

The manifest files are specifically created for training Automatic Speech Recognition (ASR) models in the NVIDIA NeMo framework, but they can be used with any other framework that supports manifest-based input formats, or be reformatted for other use cases.

To use the dataset, simply load the manifest files (train-manifest.json and test-manifest.json) in your training script. The file paths for the audio files and the corresponding transcriptions are already provided in these manifest files.
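
Each manifest follows the standard NeMo layout of one JSON object per line. Assuming the repository has been cloned as shown in the next section, you can inspect the first training entry like this (the three fields printed are the ones NeMo requires; any additional fields in these particular manifests are not guaranteed here):

```python
import json

# Read the first entry of the training manifest (one JSON object per line)
with open("jeli-asr/manifests/train-manifest.json", encoding="utf-8") as f:
    entry = json.loads(f.readline())

# Standard NeMo fields: path to the audio clip, its duration in seconds, and the transcription
print(entry["audio_filepath"], entry["duration"], entry["text"])
```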

### Downloading the Dataset


```bash
# Clone the dataset repository, maintaining the directory structure for quick setup with NeMo
git clone --depth 1 https://huggingface.co/datasets/RobotsMali/jeli-asr
```

OR


```python
from datasets import load_dataset

# Load the dataset into a Hugging Face Dataset object
dataset = load_dataset("RobotsMali/jeli-asr")
```
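
The individual configurations declared in the metadata can also be loaded on their own. For example, a minimal sketch for pulling only the `bam-asr-oza` subset and inspecting one sample (the field names come from the `features` section above):

```python
from datasets import load_dataset

# Load a single configuration (see `configs` in the metadata above)
oza = load_dataset("RobotsMali/jeli-asr", "bam-asr-oza")

sample = oza["train"][0]
print(sample["duration"])  # clip length in seconds
print(sample["bam"])       # Bambara transcription
print(sample["french"])    # French translation
```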

### Finetuning Example in NeMo

```python
from nemo.collections.asr.models import ASRModel

train_manifest = 'jeli-asr/manifests/train-manifest.json'
test_manifest = 'jeli-asr/manifests/test-manifest.json'

asr_model = ASRModel.from_pretrained("QuartzNet15x5Base-En")

# Adapt the model's vocabulary to Bambara before training (see the sketch below),
# then point the model at the training and validation manifests
asr_model.setup_training_data(train_data_config={'manifest_filepath': train_manifest})
asr_model.setup_validation_data(val_data_config={'manifest_filepath': test_manifest})
```
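
The vocabulary adaptation mentioned in the comment above is needed because the pretrained English model does not cover Bambara characters. Below is a hedged sketch of that step and of the usual NeMo / PyTorch Lightning training loop; the character list is purely illustrative and should be derived from the actual training transcriptions, and in practice the vocabulary change belongs before the two `setup_*_data` calls:

```python
import pytorch_lightning as pl

# Illustrative Bambara character set -- derive the real one from the training transcriptions
bambara_chars = list(" abcdefghijklmnopqrstuvwxyzɛɔɲŋ'-")
asr_model.change_vocabulary(new_vocabulary=bambara_chars)

# Standard NeMo training flow with a PyTorch Lightning trainer
trainer = pl.Trainer(max_epochs=50, accelerator="gpu", devices=1)
trainer.fit(asr_model)
```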

## Known Issues

While significantly improved, this dataset may still contain a few slightly misaligned samples. It retains most of the issues of the original dataset, such as:

  • Inconsistent transcriptions
  • Non-standardized naming conventions
  • Language and spelling issues
  • Inaccurate translations

## Citation

If you use this dataset in your research or project, please credit the creators of the original datasets.