🏆 News: Our OWSM v4 paper won the Best Student Paper Award at INTERSPEECH 2025!

Dataset Card for YODAS_OWSMv4

Dataset Description

Open Whisper-style Speech Model (OWSM) is the first fully open Whisper-style speech foundation model. It reproduces and advances OpenAI's Whisper-style training using publicly available data and open-source toolkits. The code, pre-trained model weights, and training logs are publicly released to promote open science in speech foundation models.

This repo contains the newly curated training data for OWSM v4, the latest version in the OWSM series. The dataset is a high-quality subset of YODAS2, comprising 166,000 hours of multilingual speech spanning 75 languages. Utterances are segmented into clips of up to 30 seconds for consistency with OWSM training.

Dataset Creation

Due to the nature of web-sourced data, the original YODAS2 dataset contains inaccurate language labels and misaligned audio-text pairs. Our preliminary experiments suggest that such noise hurts the performance of downstream ASR models. To address this, we developed a scalable data-cleaning pipeline using publicly available toolkits, resulting in a curated subset of the original dataset. This cleaned dataset forms a core part of the training data for our OWSM v4 models, which, when combined with existing OWSM data, significantly outperform previous versions on multilingual benchmarks.

The data cleaning process consists of three stages.

[Figure: YODAS data cleaning procedure]

Stage 1: Resegmentation (Section 2.1.1 in Paper)

YODAS provides unsegmented long-form recordings, each of which is accompanied by a list of text transcriptions annotated with start and end timestamps. However, some timestamps are inaccurate. Consequently, our first step is to realign the audio and text using the CTC segmentation algorithm.
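The sketch below shows how this realignment can be done with the open-source ctc_segmentation package. It is only a sketch: the CTC posterior matrix lpz, the character list, and the frame duration are placeholders for whatever CTC acoustic model is used; the actual models and settings of our pipeline are described in the paper.

```python
# Minimal sketch of CTC-based realignment with the `ctc_segmentation` package.
# `lpz` is the CTC (log-)posterior matrix from a CTC acoustic model (placeholder),
# `char_list` is that model's token inventory, and `frame_duration` is the
# duration of one CTC output frame in seconds.
import numpy as np
from ctc_segmentation import (
    CtcSegmentationParameters,
    ctc_segmentation,
    determine_utterance_segments,
    prepare_text,
)

def realign(lpz: np.ndarray, char_list: list, transcripts: list, frame_duration: float):
    """Align each transcript to the long-form recording and return
    (start_sec, end_sec, confidence_score) per utterance."""
    config = CtcSegmentationParameters()
    config.char_list = char_list
    config.index_duration = frame_duration

    ground_truth_mat, utt_begin_indices = prepare_text(config, transcripts)
    timings, char_probs, _ = ctc_segmentation(config, lpz, ground_truth_mat)
    # Each returned segment is (start_time, end_time, confidence score); the
    # score is reused in Stage 3 for quality filtering.
    return determine_utterance_segments(
        config, utt_begin_indices, char_probs, timings, transcripts
    )
```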

Stage 2: LID-based filtering (Section 2.1.2 in Paper)

Some utterances have incorrect language labels. We perform language identification (LID) on both the audio and the text using public models, and keep an utterance only if its language label matches the language identified from both; otherwise it is removed.
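For illustration, the consistency check can be sketched as follows; audio_lid and text_lid stand in for the public audio and text language-identification models (they are placeholders, not the exact models used in the paper).

```python
# Sketch of the LID-consistency check: an utterance is kept only if its
# language label matches the language identified from BOTH audio and text.
# `audio_lid` and `text_lid` are placeholder callables returning a language
# code such as "en".
def keep_utterance(label: str, audio_path: str, transcript: str,
                   audio_lid, text_lid) -> bool:
    lang_from_audio = audio_lid(audio_path)
    lang_from_text = text_lid(transcript)
    return label == lang_from_audio and label == lang_from_text
```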

Stage 3: CTC-score-based filtering (Section 2.1.3 in Paper)

The CTC segmentation algorithm in Stage 1 assigns a confidence score to each utterance, which measures the speech-text alignment quality. We filter out utterances with low CTC scores. The CTC confidence score is language-dependent; therefore, we rank the scores of short utterances within each language and select a relative threshold (quantile).
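A minimal sketch of this per-language quantile filtering, assuming each utterance carries its language label and the CTC confidence score from Stage 1 (quantile=0.10 corresponds to the released dump/raw/yodas0.10 subset):

```python
# Illustrative per-language CTC-score filtering: scores are ranked within each
# language and the lowest `quantile` fraction is discarded.
from collections import defaultdict
import numpy as np

def filter_by_ctc_score(utterances, quantile=0.10):
    """utterances: list of dicts with "lang" and "ctc_score" keys."""
    scores_by_lang = defaultdict(list)
    for utt in utterances:
        scores_by_lang[utt["lang"]].append(utt["ctc_score"])

    # Language-dependent threshold: the `quantile`-th score within each language.
    thresholds = {
        lang: np.quantile(scores, quantile) for lang, scores in scores_by_lang.items()
    }
    return [utt for utt in utterances if utt["ctc_score"] >= thresholds[utt["lang"]]]
```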

⚠️ Please read the following note before using the data.

In this repo, we release two subsets:

  • 😞 [Low quality, not recommended] dump/raw/yodas0.00: The filtering threshold is 0.00, i.e., no filtering based on CTC score. This subset contains all data at the end of Stage 2.
    • ⚠️ This subset contains low-quality data that hurts ASR performance, as shown in Table 2 of our paper. It is NOT recommended unless further filtering is performed.
  • 😊 [Good quality, recommended] dump/raw/yodas0.10: The filtering threshold is 0.10. This is the actual training data used to train OWSM v4. Detailed statistics can be found in dump/raw/yodas0.10/stats.txt.
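If you only need the recommended subset, one way to fetch it is with huggingface_hub; the allow_patterns path below follows the subset layout described above.

```python
# Download only the recommended dump/raw/yodas0.10 subset of this dataset repo.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="espnet/yodas_owsmv4",
    repo_type="dataset",
    allow_patterns=["dump/raw/yodas0.10/*"],
)
print(local_dir)  # local path containing dump/raw/yodas0.10/
```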

Dataset Usage

This dataset follows the ESPnet OWSM data format, as described in the ESPnet s2t1 recipe.

Structure

Each subset contains the following files:

  • text: text transcriptions with utterance-level timestamps: uttid <language><asr><start_time1> sentence1<end_time1><start_time2> sentence2<end_time2>...
  • text.ctc: text transcriptions without timestamps: uttid sentence1 sentence2 ...
  • text.prev: text transcriptions of the previous sentence in the same recording without timestamps: uttid previous_sentences
  • wav.scp: paths to audio files: uttid audio_path. The audio files are saved in Kaldi ark format; kaldiio can be used to read the audio array and sample rate, as shown in the example below.
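A minimal reading example with kaldiio, assuming the recommended yodas0.10 subset has been downloaded to the current directory:

```python
# Iterate over the audio referenced by wav.scp; each entry yields the utterance
# ID together with the sample rate and waveform array.
from kaldiio import ReadHelper

with ReadHelper("scp:dump/raw/yodas0.10/wav.scp") as reader:
    for uttid, (rate, array) in reader:
        print(uttid, rate, array.shape)
        break  # show only the first utterance
```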

Utterance IDs follow this format: <recording_id>_<start_time>_<end_time>_<language>_asr, where <recording_id> is the original ID in YODAS2. Therefore, this dataset can be mapped to the original dataset if needed.
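For example, the mapping can be recovered by splitting the utterance ID from the right; this sketch assumes the start/end times and the language tag contain no underscores, while the recording ID may.

```python
# Split an ID like <recording_id>_<start_time>_<end_time>_<language>_asr
# back into its components.
def parse_uttid(uttid: str):
    body, _task = uttid.rsplit("_", 1)            # drop trailing "asr"
    body, language = body.rsplit("_", 1)
    body, end_time = body.rsplit("_", 1)
    recording_id, start_time = body.rsplit("_", 1)
    return recording_id, start_time, end_time, language
```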


OWSM v4 Results

Our OWSM v4 models, trained on this curated dataset alongside existing OWSM data, significantly outperform previous versions on multilingual benchmarks. They even match or surpass frontier industrial models in multiple scenarios. Please refer to our paper for comprehensive evaluations. Below, we highlight some notable results.

Language Identification

[Figure: Language identification accuracy]

English ASR

[Figure: English ASR WER vs. inference speed]

Multilingual ASR

[Figure: Multilingual ASR WER]

Pre-trained Models

Encoder-decoder OWSM

CTC-based OWSM

Name                Size   Hugging Face Repo
OWSM-CTC v4 medium  1.01B  https://huggingface.co/espnet/owsm_ctc_v4_1B

Citation

@inproceedings{owsm-v4,
  title={{OWSM} v4: Improving Open Whisper-Style Speech Models via Data Scaling and Cleaning},
  author={Yifan Peng and Shakeel Muhammad and Yui Sudo and William Chen and Jinchuan Tian and Chyi-Jiunn Lin and Shinji Watanabe},
  booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
  year={2025},
}