---
license: cc-by-nc-sa-4.0
---

# PianoVAM: A Multimodal Dataset for Piano Music Transcription

## Summary

This repository contains the **PianoVAM (Visual and Audio Music Performance)** dataset, a multimodal collection of piano performances designed for research in Music Information Retrieval (MIR).

The dataset features synchronized recordings of a variety of piano pieces, providing rich data across several modalities. Our goal is to provide a comprehensive resource for developing and evaluating models of the relationships among the visual, auditory, and symbolic aspects of music performance.

## How to Cite

If you use the PianoVAM dataset in your research, please cite it as follows:

```bibtex
@inproceedings{kim2025pianovam,
  title={PianoVAM: A Multimodal Piano Performance Dataset},
  author={Kim, Yonghyun and Park, Junhyung and Bae, Joonhyung and Kim, Kirak and Kwon, Taegyun and Lerch, Alexander and Nam, Juhan},
  booktitle={Proceedings of the 26th International Society for Music Information Retrieval Conference (ISMIR)},
  year={2025}
}
```

## Dataset Description

The dataset consists of piano pieces performed by multiple pianists. The data were captured simultaneously from a digital piano and high-resolution cameras to ensure precise synchronization among the audio, video, and MIDI streams. The collection is designed to cover a range of musical complexities and styles.

## Directory Structure

The dataset is organized into the following directories based on data modality:

```
PianoVAM_v1.0/
├── Audio/
├── Handskeleton/
├── MIDI/
├── TSV/
├── metadata.json
├── README.md
├── Video/ (Coming Soon)
└── Fingering/ (Coming Soon)
```

### Folder Contents

* **`Audio/`**: Contains the raw audio recordings of the piano performances.
    * **Format:** Uncompressed WAV (`.wav`).
    * **Sample Rate:** 44100 Hz.
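The audio format can be checked with Python's standard `wave` module. The sketch below synthesizes a short tone in memory so it runs stand-alone; with the dataset, you would instead open a file from `Audio/` directly.

```python
import io
import math
import struct
import wave

SR = 44100  # the dataset's documented sample rate

# Synthesize one second of a 440 Hz sine tone in memory so the snippet is
# self-contained; with the dataset, open an Audio/*.wav file instead.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 16-bit PCM
    w.setframerate(SR)
    w.writeframes(b"".join(
        struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * 440 * t / SR)))
        for t in range(SR)))

# Read it back the same way a recording would be read, and check the rate.
buf.seek(0)
with wave.open(buf, "rb") as w:
    sample_rate = w.getframerate()
    n_frames = w.getnframes()
```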
* **`Handskeleton/`**: Contains the 3D hand landmark data for each performance.
    * **Format:** JSON (`.json`) files.
    * **Details:** Each file contains frame-by-frame coordinates for the 21 keypoints of both the left and right hands, as captured by MediaPipe Hands.
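The exact JSON layout is easiest to confirm by opening a file. Purely as an illustration, the sketch below assumes a hypothetical per-frame structure with `left`/`right` keypoint lists; these field names and the tiny two-keypoint sample are assumptions, not the dataset's documented schema.

```python
import json

# Hypothetical layout for illustration only; inspect a real
# Handskeleton/*.json file and adapt the field names accordingly.
# (MediaPipe Hands provides 21 [x, y, z] keypoints per hand; only
# two are shown here to keep the sample short.)
sample = """
{
  "frames": [
    {"left":  [[0.10, 0.20, 0.00], [0.12, 0.22, 0.01]],
     "right": [[0.80, 0.21, 0.00], [0.78, 0.19, 0.02]]}
  ]
}
"""
data = json.loads(sample)

def hand_centroid(keypoints):
    """Mean (x, y, z) position over a hand's keypoints."""
    n = len(keypoints)
    return tuple(sum(p[i] for p in keypoints) / n for i in range(3))

left_c = hand_centroid(data["frames"][0]["left"])
```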

* **`MIDI/`**: Contains the ground-truth performance data recorded directly from a digital piano.
    * **Format:** Standard MIDI Files (`.mid`).
    * **Details:** These files provide the precise timing (onset, offset), pitch, and velocity of every note played.
* **`metadata.json`**: A JSON file that maps each recording to its corresponding data split (`train`, `valid`, `test`, `special`) and provides other relevant information.

    ```json
    {
        "0": {
            "record_time": "2024-02-14_19-10-09",
            "split": "train",
            "composer": "E. Grieg",
            "piece": "Piano Concerto",
            "performance_method": "Solo",
            "performance_type": "DailyPractice",
            "duration": "0 days 00:12:25.493.000000",
            "P1_name": "Yonghyun",
            "P1_gender": "Male",
            "P1_age": "20-29",
            "P1_skill": "Advanced",
            "P1_musicmajor": "No",
            "P2_name": null,
            "P2_gender": null,
            "P2_age": null,
            "P2_skill": null,
            "P2_musicmajor": null,
            "Point_LT": "121, 355",
            "Point_RT": "1839, 345",
            "Point_RB": "1839, 558",
            "Point_LB": "120, 564"
        },
        ...
    }
    ```
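A minimal sketch of consuming such a file, run here against a short inline sample that mirrors the schema above (the `Point_*` fields store keyboard-corner pixel coordinates as `"x, y"` strings):

```python
import json

# Inline sample mirroring metadata.json; with the dataset, load the
# real file with json.load(open("metadata.json")) instead.
sample = """
{
  "0": {"split": "train", "composer": "E. Grieg", "Point_LT": "121, 355"},
  "1": {"split": "test",  "composer": "F. Chopin", "Point_LT": "118, 350"}
}
"""
metadata = json.loads(sample)

def ids_for_split(meta, split):
    """Return the recording IDs belonging to a given data split."""
    return [rid for rid, info in meta.items() if info["split"] == split]

def parse_point(s):
    """Turn an 'x, y' corner string into an (x, y) integer tuple."""
    x, y = s.split(",")
    return int(x), int(y)

train_ids = ids_for_split(metadata, "train")
lt = parse_point(metadata["0"]["Point_LT"])
```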

* **`TSV/`**: Contains pre-processed label data derived from the MIDI files for easier parsing. Each file is a tab-separated values file with 5 columns.
    * **Format:** TSV (`.tsv`).
    * **Header:** `onset`, `key_offset`, `frame_offset`, `note`, `velocity`
    * **Example:**

        ```
        onset      key_offset  frame_offset  note  velocity
        6.684375   6.740625    6.740625      105   92
        6.685417   6.735417    6.735417      96    87
        ```

    * **Column Descriptions:**
        * **`onset`**: The start time of the note, in seconds.
        * **`key_offset`**: The time when the finger is **physically released** from the key, in seconds. This is useful for video-based research such as fingering analysis.
        * **`frame_offset`**: The time when the sound **completely ends**, taking pedal usage into account. This is analogous to the 'offset' used in traditional audio-only transcription.
        * **`note`**: The MIDI note number (pitch).
        * **`velocity`**: The MIDI velocity (how hard the key was struck).
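Putting the columns together, the TSV labels can be parsed with the standard library alone. The sketch below reads an inline copy of the example rows and computes how long each note's sound outlasts the key release (the pedal-sustained tail, i.e. `frame_offset - key_offset`):

```python
import csv
import io

# Inline copy of the example rows above; with the dataset, open a file
# from TSV/ with open(path, newline="") instead of io.StringIO.
tsv_text = (
    "onset\tkey_offset\tframe_offset\tnote\tvelocity\n"
    "6.684375\t6.740625\t6.740625\t105\t92\n"
    "6.685417\t6.735417\t6.735417\t96\t87\n"
)

notes = []
for row in csv.DictReader(io.StringIO(tsv_text), delimiter="\t"):
    notes.append({
        "onset": float(row["onset"]),
        "key_offset": float(row["key_offset"]),
        "frame_offset": float(row["frame_offset"]),
        "note": int(row["note"]),
        "velocity": int(row["velocity"]),
    })

# Sustain beyond key release; zero here since frame_offset == key_offset.
sustain = [n["frame_offset"] - n["key_offset"] for n in notes]
```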

### Coming Soon

* **`Video/`**: This directory will contain the video recordings of the performances, synchronized with the audio and MIDI data. The planned format is HDF5 (`.h5`) containing frame-by-frame image data.

* **`Fingering/`**: This directory will contain detailed frame-by-frame fingering annotations for each note played.

## License

This dataset is licensed under the **Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)**. You are free to share and adapt the material for non-commercial purposes, provided you give appropriate credit and distribute your contributions under the same license.

## Contact

For any questions about the dataset, please open an issue in the Community tab of this repository or contact [Your Name/Email].