
AVE Speech: A Comprehensive Multi-Modal Dataset for Speech Recognition Integrating Audio, Visual, and Electromyographic Signals

Multi-modal speech recognition

Abstract

The global aging population faces considerable challenges, particularly in communication, due to the prevalence of hearing and speech impairments. To address these challenges, we introduce AVE Speech, a comprehensive multi-modal dataset for speech recognition tasks. The dataset includes a 100-sentence Mandarin corpus with audio signals, lip-region video recordings, and six-channel electromyography (EMG) data, collected from 100 participants. Each subject read the entire corpus ten times, with each sentence averaging approximately two seconds in duration, resulting in over 55 hours of data per modality. Experiments demonstrate that combining these modalities significantly improves recognition performance, particularly in cross-subject and high-noise environments. To our knowledge, this is the first publicly available sentence-level dataset integrating these three modalities for large-scale Mandarin speech recognition. We expect this dataset to drive advances in both acoustic and non-acoustic speech recognition research, enhancing cross-modal learning and human-machine interaction.

About the Dataset

The AVE Speech Dataset includes a 100-sentence Mandarin Chinese corpus with audio signals, lip-region video recordings, and six-channel electromyography (EMG) data, collected from 100 participants. Each subject read the entire corpus ten times, with each sentence averaging approximately two seconds in duration, resulting in over 55 hours of data per modality. The dataset will be made publicly available once the related paper has been accepted for publication.

The related source code is available at: 👉 AVE-Speech Code on GitHub
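Once access has been granted, the files can be fetched with `huggingface_hub`. Below is a minimal sketch; the repository ID shown is a placeholder, not the real one, and you must be logged in (e.g., via `huggingface-cli login`) after accepting the access conditions:

```python
# Minimal sketch for downloading the dataset files after access is granted.
# NOTE: "<user>/AVE-Speech" is a placeholder repo ID; replace it with the
# actual dataset repository ID on the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<user>/AVE-Speech",  # hypothetical repo ID
    repo_type="dataset",
)
print(f"Dataset downloaded to: {local_dir}")
```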

Corpus Design

| Index | Chinese Sentence | Phonetic Transcription (Mandarin) | Tone | English Translation |
|-------|------------------|-----------------------------------|------|---------------------|
| #0 | 我饿了 | wo e le | 3 4 5 | I'm hungry |
| #1 | 我口渴 | wo kou ke | 3 3 3 | I'm thirsty |
| #2 | 我吃饱了 | wo chi bao le | 3 1 3 5 | I'm full |
| #3 | 水太烫了 | shui tai tang le | 3 4 4 5 | The water is too hot |
| #4 | 我太累了 | wo tai lei le | 3 4 4 5 | I'm too tired |
| ... | ... | ... | ... | ... |
| #99 | 向右转 | xiang you zhuan | 4 4 3 | Turn right |
| #100 | (无指令 / no command) | None | None | None |

For more details, please refer to the file phonetic_transcription.xlsx.
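As an illustration, the index-to-sentence mapping can be loaded with pandas. This is a minimal sketch: the column names are assumed to match the table above and may need adjusting to the actual file.

```python
# Minimal sketch: load the corpus metadata and build an index -> sentence map.
# Assumes phonetic_transcription.xlsx has columns matching the table above;
# adjust the column names if the actual file differs.
import pandas as pd

corpus = pd.read_excel("phonetic_transcription.xlsx")
index_to_sentence = dict(zip(corpus["Index"], corpus["English Translation"]))
print(index_to_sentence.get("#0"))  # e.g., "I'm hungry"
```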

Usage

Each ZIP file, when extracted, contains sessions numbered from 1 to 10. In rare cases, a session may be missing. Within each session, there are multiple files, each named according to the index found in the phonetic_transcription.xlsx file, corresponding to a specific Chinese sentence.
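For example, iterating over one extracted archive might look like the sketch below. The root folder name is hypothetical, and the file extension will differ per modality (e.g., audio vs. video vs. EMG):

```python
# Minimal sketch: walk an extracted ZIP (sessions 1-10, files named by
# sentence index). The root path here is an assumption for illustration.
from pathlib import Path

root = Path("subject_001")  # hypothetical extracted folder for one subject
for session in sorted(root.iterdir()):
    if not session.is_dir():
        continue  # skip stray files; also tolerates missing sessions
    for recording in sorted(session.iterdir()):
        # The file stem matches an index in phonetic_transcription.xlsx.
        print(session.name, recording.stem, recording.suffix)
```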

Citation

If you use this dataset in your work, please cite it as:

```bibtex
@article{zhou2025ave,
  title={AVE Speech: A Comprehensive Multi-Modal Dataset for Speech Recognition Integrating Audio, Visual, and Electromyographic Signals},
  author={Zhou, Dongliang and Zhang, Yakun and Wu, Jinghan and Zhang, Xingyu and Xie, Liang and Yin, Erwei},
  journal={IEEE Transactions on Human-Machine Systems},
  year={2025}
}
```