---
license: cc-by-4.0
language:
- en
---
|
|
|
# Dataset Card for youtube-commons-asr-eval
|
|
|
## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
|
|
|
|
|
## Dataset Description

### Dataset Summary
|
|
|
This evaluation dataset was created from a subset of YouTube-Commons ([PleIAs/YouTube-Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons)) by selecting English YouTube videos and their corresponding English subtitles.
|
### Supported Tasks and Leaderboards
|
|
|
This dataset is primarily useful for automatic speech recognition (ASR) evaluation tasks such as the [hf-audio/open_asr_leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard).
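
As an illustrative sketch of such an evaluation, the word error rate (WER) between a model's normalized predictions and this dataset's normalized references could be computed with the `evaluate` library (the strings below are placeholders, not taken from the dataset):

```python
import evaluate

# Word error rate, the metric reported by ASR leaderboards.
wer_metric = evaluate.load("wer")

# Placeholder reference/prediction pair; in practice these would be the
# normalized transcriptions (norm_text) and the normalized model outputs.
references = ["the quick brown fox jumps over the lazy dog"]
predictions = ["the quick brown fox jumped over the lazy dog"]

wer = wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.3f}")  # fraction of word-level substitutions/insertions/deletions
```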
|
|
|
### Languages
|
|
|
This subset is intended for English-language evaluation.
|
|
|
## Dataset Structure
|
|
|
The dataset consists of 94 video links, transcriptions, and normalized transcriptions (around 38 hours of age-appropriate audio in total), each with a minimum transcript length of 300 words. At a typical speaking rate of 2.5 words per second, this corresponds to a minimum duration of about 2 minutes. The shortest file is 128 seconds and the longest is 2 hours 8 minutes; the average duration per file is a little over 24 minutes, with a standard deviation of 25 minutes. This substantial variability in audio duration mirrors what is typically encountered in real-world settings.
|
|
|
|
|
### Data Fields
|
|
|
Each row in the JSON file has the following fields: `link` (URL of the YouTube video), `text` (transcription), `norm_text` (normalized transcription), and `duration` (duration of the video).
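
A minimal sketch of loading and inspecting the data with the `datasets` library (the repository id and split name below are placeholders, since they are not specified here):

```python
from datasets import load_dataset

# Placeholder repository id and split name -- substitute the actual
# namespace and split of this dataset on the Hugging Face Hub.
dataset = load_dataset("<org>/youtube-commons-asr-eval", split="test")

row = dataset[0]
print(row["link"])             # URL of the source YouTube video
print(row["duration"])         # duration of the video
print(row["text"][:200])       # raw transcription (first 200 characters)
print(row["norm_text"][:200])  # normalized transcription
```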
|
### Data Splits
|
|
|
All data belongs to a single evaluation split.
|
## Dataset Creation
|
|
|
Normalization is done via the `EnglishTextNormalizer` from the [open_asr_leaderboard normalizer](https://github.com/huggingface/open_asr_leaderboard/blob/main/normalizer/normalizer.py).
|
The dataset was created by selecting the first 100 files from YouTube-Commons that have a minimum of 300 transcription words and age-appropriate content. Three files were manually removed owing to high transcription error rates observed in visual inspection and confirmed by high WER across several ASR implementations.
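
As a minimal sketch of the normalization step, the snippet below uses the equivalent `EnglishTextNormalizer` shipped with the `openai-whisper` package, from which the leaderboard's normalizer is adapted (an assumption for illustration, not part of this card):

```python
from whisper.normalizers import EnglishTextNormalizer  # pip install openai-whisper

normalizer = EnglishTextNormalizer()

raw = "Well, Mr. Quilter is the apostle of the middle classes, isn't he?"
print(normalizer(raw))
# Produces a lowercased, punctuation-free form with abbreviations and
# contractions expanded, roughly:
#   well mister quilter is the apostle of the middle classes is not he
```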
|
|
|
## Additional Information

### Licensing Information
|
|
|
All transcripts are part of videos shared under a CC-BY license on YouTube. All licensing terms are the same as those of the original dataset, [PleIAs/YouTube-Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons).
|