NVSpeech Dataset
Overview
The NVSpeech dataset provides extensive annotations of paralinguistic vocalizations for Mandarin Chinese speech, aimed at enhancing the capabilities of automatic speech recognition (ASR) and text-to-speech (TTS) systems. The dataset features explicit word-level annotations for 18 categories of paralinguistic vocalizations, including non-verbal sounds like laughter and breathing, as well as lexicalized interjections like "uhm" and "oh."
Annotation Categories
The NVSpeech dataset includes annotations for the following paralinguistic vocalization categories (a short extraction sketch follows the list):
- [Breathing]
- [Laughter]
- [Cough]
- [Sigh]
- [Confirmation-en]
- [Question-en]
- [Question-ah]
- [Question-oh]
- [Surprise-ah]
- [Surprise-oh]
- [Dissatisfaction-hnn]
- [Uhm]
- [Shh]
- [Crying]
- [Surprise-wa]
- [Surprise-yo]
- [Question-ei]
- [Question-yi]
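To illustrate how these categories surface in the data, the sketch below assumes that the bracketed category names appear as inline tokens in the transcript text; the example sentence is invented and the exact token format is an assumption, not taken from the dataset itself.

```python
import re

# Hypothetical annotated utterance; the inline [Tag] convention is inferred
# from the category names above, and the sentence itself is invented.
text = "他说完之后 [Laughter] 大家都愣住了 [Breathing] 然后才开始鼓掌"

# Match bracketed paralinguistic tokens such as [Laughter] or [Surprise-ah].
tag_pattern = re.compile(r"\[([A-Za-z]+(?:-[a-z]+)?)\]")

tags = tag_pattern.findall(text)                       # ['Laughter', 'Breathing']
plain = " ".join(tag_pattern.sub(" ", text).split())   # transcript with tags stripped

print("paralinguistic tags:", tags)
print("plain transcript:", plain)
```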
Usage
from datasets import load_dataset
dataset = load_dataset("Hannie0813/NVSpeech170k")
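Once loaded, individual examples can be inspected with the standard `datasets` API. The split name `train` and the column names `audio` and `text` below are assumptions about the release's schema; check the printed dataset description and `column_names` before relying on them.

```python
from datasets import load_dataset

dataset = load_dataset("Hannie0813/NVSpeech170k")

# Inspect splits and columns before assuming a schema.
print(dataset)

# A minimal sketch, assuming a "train" split with "audio" and "text" columns;
# adjust to the actual split and column names reported above.
sample = dataset["train"][0]
print(sample["text"])                    # transcript with inline paralinguistic tokens
print(sample["audio"]["sampling_rate"],  # decoded waveform metadata
      len(sample["audio"]["array"]))
```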
Intended Use
NVSpeech is designed to facilitate:
- Training and evaluation of paralinguistic-aware speech recognition models.
- Development of expressive and controllable TTS systems that can accurately synthesize human-like speech with inline paralinguistic cues.
Tasks
- Automatic Speech Recognition (ASR)
- Text-to-Speech (TTS) Synthesis
- Paralinguistic Tagging
Languages
- Mandarin Chinese
Dataset Structure
- Format: WAV audio paired with text transcriptions containing inline paralinguistic tokens.
- Size: 174,179 automatically annotated utterances, totaling over 573 hours.
License
The NVSpeech dataset is available for research use under the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.
Citation
If you use NVSpeech in your research, please cite:
Contact
For further questions, please visit the project webpage or contact the authors through the provided channels.