# PianoBART
This description was generated by Grok 3.
## Model Details
- **Model Name**: PianoBART
- **Model Type**: Transformer-based model (BART architecture) for symbolic piano music generation and understanding
- **Version**: 1.0
- **Release Date**: August 2025
- **Developers**: Zijian Zhao, Weichao Zeng, Yutong He, Fupeng He, Yiyi Wang
- **Organization**: SYSU
- **License**: Apache License 2.0
- **Paper**: [PianoBART: Symbolic Piano Music Generation and Understanding with Large-Scale Pre-Training](https://ieeexplore.ieee.org/document/10688332), ICME 2024
- **arXiv**: https://arxiv.org/abs/2407.03361
- **Citation**:
```
@INPROCEEDINGS{10688332,
author={Liang, Xiao and Zhao, Zijian and Zeng, Weichao and He, Yutong and He, Fupeng and Wang, Yiyi and Gao, Chengying},
booktitle={2024 IEEE International Conference on Multimedia and Expo (ICME)},
title={PianoBART: Symbolic Piano Music Generation and Understanding with Large-Scale Pre-Training},
year={2024},
volume={},
number={},
pages={1-6},
doi={10.1109/ICME57554.2024.10688332}
}
```
- **Contact**: [email protected]
- **Repository**: https://github.com/RS2002/PianoBart
## Model Description
PianoBART is a transformer-based model built on the Bidirectional and Auto-Regressive Transformers (BART) architecture, designed for symbolic piano music generation and understanding. It leverages large-scale pre-training to perform tasks such as music generation, composer classification, emotion classification, velocity prediction, and melody prediction. The model processes symbolic music data in an octuple format and is inspired by frameworks like [MusicBERT](https://github.com/microsoft/muzic/tree/main/musicbert) and [MidiBERT-Piano](https://github.com/wazenmai/MIDI-BERT).
- **Architecture**: BART (encoder-decoder transformer)
- **Input Format**: Octuple representation of symbolic music with shape `(batch_size, sequence_length, 8)` for both encoder and decoder (see the sketch after this list)
- **Output Format**: Hidden states of dimension `[batch_size, sequence_length, 1024]`
- **Hidden Size**: 1024
- **Training Objective**: Pre-training with large-scale datasets followed by task-specific fine-tuning
- **Tasks Supported**: Music generation, composer classification, emotion classification, velocity prediction, melody prediction
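As a quick illustration of the octuple input format, the sketch below builds a dummy encoder input and notes the expected output shape. The attribute ordering named in the comments follows the MusicBERT octuple convention and is an assumption here, not confirmed for PianoBART; the actual token vocabulary is presumably defined by `Octuple.pkl` in the repository.
```python
import torch

# A single octuple token is a vector of 8 attribute indices; a sequence of such
# tokens forms the (batch_size, sequence_length, 8) encoder/decoder input.
# The attribute order below follows the MusicBERT octuple convention and is an
# assumption for PianoBART:
# (bar, position, instrument, pitch, duration, velocity, time signature, tempo)
batch_size, seq_len = 2, 1024
encoder_input = torch.randint(1, 10, (batch_size, seq_len, 8))  # dummy attribute indices

print(encoder_input.shape)  # torch.Size([2, 1024, 8])
# The model maps each octuple token to a 1024-dimensional hidden state,
# i.e. the output has shape (batch_size, sequence_length, 1024).
```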
## Training Data
The model was pre-trained and fine-tuned on the following datasets:
- **Pre-training**: POP1K7, ASAP, POP909, Pianist8, EMOPIA
- **Generation**: Maestro, GiantMIDI
- **Composer Classification**: ASAP, Pianist8
- **Emotion Classification**: EMOPIA
- **Velocity Prediction**: GiantMIDI
- **Melody Prediction**: POP909
For dataset preprocessing and organization, refer to the [MusicBERT](https://github.com/microsoft/muzic/tree/main/musicbert) and [MidiBERT-Piano](https://github.com/wazenmai/MIDI-BERT) repositories.
## Usage
### Installation
```shell
git clone https://huggingface.co/RS2002/PianoBART
```
Please ensure that the `model.py` and `Octuple.pkl` files are located in the same folder.
### Example Code
```python
import torch
from model import PianoBART

# Load the pre-trained model
model = PianoBART.from_pretrained("RS2002/PianoBART")

# Example input: random octuple indices standing in for preprocessed MIDI data,
# with shape (batch_size=2, sequence_length=1024, 8 octuple attributes)
input_ids_encoder = torch.randint(1, 10, (2, 1024, 8))
input_ids_decoder = torch.randint(1, 10, (2, 1024, 8))
encoder_attention_mask = torch.zeros((2, 1024))
decoder_attention_mask = torch.zeros((2, 1024))

# Forward pass
output = model(input_ids_encoder, input_ids_decoder, encoder_attention_mask, decoder_attention_mask)
print(output.last_hidden_state.size())  # torch.Size([2, 1024, 1024])
``` |
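The hidden states returned above can be fed to a task-specific head for the downstream tasks listed in the model description. Continuing from the example, the following sketch is a minimal, hypothetical setup that mean-pools the hidden states and applies a linear classifier (using 4 classes, as in EMOPIA's emotion quadrants); the pooling strategy, head, and class count are illustrative assumptions, not the fine-tuning configuration from the paper.
```python
import torch.nn as nn

# Hypothetical downstream head (illustration only): mean-pool the hidden states
# from the forward pass above, then apply a linear classifier.
hidden = output.last_hidden_state        # [2, 1024, 1024]
pooled = hidden.mean(dim=1)              # [2, 1024]

classifier = nn.Linear(1024, 4)          # e.g. 4 emotion classes (EMOPIA quadrants)
logits = classifier(pooled)              # [2, 4]
print(logits.argmax(dim=-1))             # predicted class index per sequence
```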