RAVDESS Preprocessed Dataset
This dataset contains preprocessed data from the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, specifically processed for cross-modal knowledge distillation research.
This preprocessing work is described in our paper: MST-Distill: Mixture of Specialized Teachers for Cross-Modal Knowledge Distillation Code: https://github.com/Gray-OREO/MST-Distill
Original Dataset
The original RAVDESS dataset is available at: https://zenodo.org/records/1188976
The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) contains 7356 files (total size: 24.8 GB). It features 24 professional actors (12 female, 12 male) vocalizing two lexically-matched statements in a neutral North American accent.
Dataset Information
- Actors: 24 professional actors (12 female, 12 male)
- Emotions: 8 emotions (neutral, calm, happy, sad, angry, fearful, disgust, surprised)
- Modalities: Audio-visual, audio-only, and video-only formats
- Statements: Two lexically-matched statements in neutral North American accent
- Content: Speech only (song content is not included in this preprocessed dataset)
Preprocessing Details
We have performed normalization preprocessing on the speech portion of the RAVDESS dataset for use in our cross-modal knowledge distillation work. The preprocessing focuses exclusively on the speech content (vocal channel 01) and does not include song data (vocal channel 02). The preprocessing consists of three main steps:
1. Audio Preprocessing (preprocess_RAVDESS_0.py)
- Input: Speech audio files only (vocal channel 01)
- Target duration: 3.6 seconds
- Sampling rate: 22,050 Hz
- Padding: Zero-padding for audio shorter than target duration
- Cropping: Equal cropping from both sides for audio longer than target duration
- Output format: WAV files with suffix _croppad.wav
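The pad/crop step above can be sketched as follows. This is a minimal illustration, not the exact script: the function name is illustrative, and padding at the end is an assumption, since the original only specifies zero-padding for short clips and equal two-sided cropping for long ones.

```python
import numpy as np

SR = 22050
TARGET_LEN = int(3.6 * SR)  # 3.6 s at 22,050 Hz = 79,380 samples

def pad_or_crop(wave: np.ndarray, target_len: int = TARGET_LEN) -> np.ndarray:
    """Zero-pad short clips; crop long clips equally from both sides."""
    n = len(wave)
    if n < target_len:
        # zero-pad up to the target duration
        return np.pad(wave, (0, target_len - n))
    # equal cropping from both sides
    start = (n - target_len) // 2
    return wave[start:start + target_len]
```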
2. Video Preprocessing (preprocess_RAVDESS_1.py)
- Input: Speech video files only (modality 01, vocal channel 01)
- Face detection: Using MTCNN (Multi-task Cascaded Convolutional Networks)
- Frame selection: 15 frames selected from 3.6 seconds of video (distributed selection)
- Face cropping: Automatic face detection and cropping
- Resize: All face regions resized to 224×224 pixels
- Output format: NumPy arrays saved as .npy files with suffix _facecroppad.npy
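The distributed frame selection can be sketched as evenly spaced indices across the clip. This is a sketch under assumptions: the exact selection scheme in preprocess_RAVDESS_1.py may differ, and the MTCNN face-detection step is omitted here.

```python
import numpy as np

def select_frames(num_frames: int, n_select: int = 15) -> np.ndarray:
    """Return n_select frame indices spread evenly over [0, num_frames - 1]."""
    return np.linspace(0, num_frames - 1, num=n_select).astype(int)

# e.g. a 3.6 s clip at 30 fps has ~108 frames
idx = select_frames(108)
```

Each selected frame would then be passed through MTCNN face detection, cropped to the detected face region, and resized to 224×224.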
3. Final Data Preparation (preprocess_RAVDESS_2.py)
- Audio features: MFCC (Mel-frequency cepstral coefficients) with 15 coefficients
- Video data: Processed face-cropped video frames
- Label encoding: Emotion labels converted to integer indices (0-7)
- Output files:
  - video_data.npy: Processed video data [N, 15, 224, 224, 3]
  - audio_data.npy: MFCC features [N, 15, time_steps]
  - label_data.npy: Emotion labels [N]
Data Structure
The preprocessed dataset contains:
- Video data: Face-cropped video frames with shape [N, 15, 224, 224, 3]
- Audio data: MFCC features extracted from normalized audio
- Labels: Emotion categories (0: neutral, 1: calm, 2: happy, 3: sad, 4: angry, 5: fearful, 6: disgust, 7: surprised)
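The labels follow the RAVDESS filename convention, in which the third hyphen-separated field is the emotion code (01–08); subtracting one yields the 0–7 indices above. A minimal sketch (the helper name is illustrative):

```python
EMOTIONS = ["neutral", "calm", "happy", "sad",
            "angry", "fearful", "disgust", "surprised"]

def label_from_filename(name: str) -> int:
    """Map a RAVDESS filename to a 0-7 emotion index.

    Filename fields: modality-vocal_channel-emotion-intensity-
    statement-repetition-actor, e.g. '03-01-06-01-02-01-12.wav'.
    """
    emotion_code = int(name.split("-")[2])  # '01'..'08'
    return emotion_code - 1

label = label_from_filename("03-01-06-01-02-01-12.wav")  # fearful
```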
Usage
```python
import numpy as np

# Load preprocessed data
video_data = np.load('video_data.npy')
audio_data = np.load('audio_data.npy')
label_data = np.load('label_data.npy')

print(f"Video data shape: {video_data.shape}")
print(f"Audio data shape: {audio_data.shape}")
print(f"Label data shape: {label_data.shape}")
```
License
The RAVDESS is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).
This preprocessed dataset maintains the same license as the original RAVDESS dataset.
Citation
If you use this preprocessed dataset in your research, please cite both the original RAVDESS paper and acknowledge the preprocessing:
Original RAVDESS Citation:
@article{livingstone2018ravdess,
title={The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English},
author={Livingstone, Steven R and Russo, Frank A},
journal={PLoS ONE},
volume={13},
number={5},
pages={e0196391},
year={2018},
publisher={Public Library of Science},
doi={10.1371/journal.pone.0196391}
}
Zenodo Citation:
@misc{livingstone2018zenodo,
title={The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS)},
author={Livingstone, Steven R and Russo, Frank A},
year={2018},
publisher={Zenodo},
doi={10.5281/zenodo.1188976},
url={https://zenodo.org/records/1188976}
}
Acknowledgments
We thank the original authors of the RAVDESS dataset for making this valuable resource available to the research community. The original dataset was created by Steven R. Livingstone and Frank A. Russo at Ryerson University.
Contact
For questions about the original RAVDESS dataset, please contact the original authors: [email protected]
For questions about this preprocessed version, please refer to our paper or create an issue in the repository.
Note to original authors: If you have any concerns or objections regarding this preprocessed dataset, please contact us and we will promptly remove it.