---
license: mit
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: augment
        path: data/augment-*
      - split: dev
        path: data/dev-*
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: transcription
      dtype: string
  splits:
    - name: train
      num_bytes: 8074963256.577805
      num_examples: 51517
    - name: augment
      num_bytes: 5465107591.698363
      num_examples: 6087
    - name: dev
      num_bytes: 131522800.77089266
      num_examples: 1580
  download_size: 16938027526
  dataset_size: 13671593649.047062
---
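The configuration above exposes three parquet-backed splits (train, augment, dev), each with a 16 kHz audio feature and a transcription string. Below is a minimal loading sketch with the `datasets` library; the repository id is a placeholder, so substitute the actual dataset id on Hugging Face.

```python
from datasets import load_dataset

repo_id = "your-org/your-dataset"  # placeholder; use the actual Hugging Face repo id

# Each split maps to the parquet shards declared in the config above.
train = load_dataset(repo_id, split="train")
dev = load_dataset(repo_id, split="dev")

sample = train[0]
print(sample["audio"]["sampling_rate"])  # 16000, per the feature definition above
print(sample["transcription"])
```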

For training and developing your models in the closed track, we provide the following datasets, which are publicly available on Hugging Face. The datasets cover a wide range of Arabic varieties and recording conditions, with over 85K training sentences in total. They consist of dialectal, Modern Standard, Classical, and code-switched Arabic speech with transcriptions. All except the Mixat and ArzEn subsets are diacritized.

| Dataset   | Type            | Diacritized | Train | Dev |
|-----------|-----------------|-------------|-------|-----|
| MDASPC    | Multi-dialectal | True        | 60677 | >1K |
| TunSwitch | Dialectal, CS   | True        | 5212  | 165 |
| ClArTTS   | CA              | True        | 9500  | 205 |
| ArVoice   | MSA             | True        | 2507  | -   |
| ArzEn     | Dialectal, CS   | False       | 3344  | -   |
| Mixat     | Dialectal, CS   | False       | 3721  | -   |

We removed samples containing fewer than 3 words and stripped punctuation from all datasets to improve consistency and quality. The resulting dataset contains about 57K train samples and 1.5K dev samples.
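The snippet below is a minimal sketch of this kind of cleaning with the `datasets` API; the punctuation set and toy examples are illustrative, not the exact preprocessing we applied.

```python
import re
from datasets import Dataset

# Toy examples; in practice this would run over each source corpus's transcriptions.
ds = Dataset.from_dict({
    "transcription": ["السلام عليكم ورحمة الله", "نعم.", "كيف الحال اليوم؟"]
})

# Illustrative punctuation set (Latin and Arabic marks); the exact characters
# removed in our preprocessing may differ.
PUNCT_RE = re.compile(r"""[.,!?;:'"،؛؟()\[\]«»-]""")

def strip_punct(example):
    example["transcription"] = PUNCT_RE.sub("", example["transcription"]).strip()
    return example

def has_min_words(example, min_words=3):
    return len(example["transcription"].split()) >= min_words

ds = ds.map(strip_punct).filter(has_min_words)
print(ds["transcription"])  # samples with fewer than 3 words are dropped
```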

For the closed track, you may use the full train/dev sets or a subset of them (for example, you may wish to use the undiacritized subsets for semi-supervised training or rely only on the diacritized subsets). For the open track, you can use these resources and/or any other resources for training, as long as they don't overlap with the test sets.
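For instance, if you want to train on all of the provided labeled data in the closed track, one option is to merge the train and augment splits; a sketch, again with a placeholder repository id:

```python
from datasets import load_dataset, concatenate_datasets

repo_id = "your-org/your-dataset"  # placeholder; use the actual Hugging Face repo id

train = load_dataset(repo_id, split="train")
augment = load_dataset(repo_id, split="augment")

# Combine both supervised splits (~57K examples); keep dev held out for evaluation.
full_train = concatenate_datasets([train, augment])
print(len(full_train))
```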