---
license: apache-2.0
datasets:
- projectlosangeles/Godzilla-MIDI-Dataset
language:
- en
tags:
- Orpheus
- MIDI
- music-ai
- music-transformer
- SOTA
- multi-instrumental
- music
---

# Orpheus Music Transformer

## SOTA 8k multi-instrumental music transformer trained on 2.31M+ high-quality MIDIs

***

## Abstract

### Project Los Angeles is very proud to present **Orpheus Music Transformer**, an efficient, SOTA transformer model for long-form, multi-instrumental music generation. At its core lies a 479M-parameter autoregressive transformer equipped with Rotary Positional Embeddings (RoPE) and Flash Attention, enabling sequence lengths of up to 8k tokens, sufficient to capture extended musical structures. Trained for three epochs on 2.31 million high-quality MIDI tracks from the Godzilla MIDI Dataset, our model employs a compact 3-token-per-note and 7-token-per-tri-chord encoding, plus a novel duration-and-velocity-last ordering to enhance expressivity. We leverage PyTorch’s bfloat16 precision and memory-efficient scaled-dot-product attention for accelerated inference on CUDA, and provide a top-*p* sampling filter with adjustable temperature.

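#### As a quick illustration of the sampling controls mentioned above, here is a minimal, generic top-*p* (nucleus) filter with temperature in PyTorch. It is a sketch for readers, not the exact sampler shipped with this model, and the default values are illustrative only.

```python
# Generic sketch of temperature + top-p (nucleus) sampling for an autoregressive
# decoder. Not the model's actual sampling code; defaults are illustrative.
import torch

def sample_next_token(logits: torch.Tensor, temperature: float = 0.9, top_p: float = 0.96) -> int:
    """Sample one token id from raw next-token logits of shape (vocab_size,)."""
    logits = logits / max(temperature, 1e-6)           # temperature scaling
    probs = torch.softmax(logits, dim=-1)

    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)

    # Zero out tokens outside the smallest set whose cumulative mass covers top_p
    outside_nucleus = cumulative - sorted_probs > top_p
    sorted_probs[outside_nucleus] = 0.0
    sorted_probs = sorted_probs / sorted_probs.sum()   # renormalize the nucleus

    choice = torch.multinomial(sorted_probs, num_samples=1)
    return int(sorted_idx[choice])

# Hypothetical usage: next_id = sample_next_token(model_logits[-1])
```
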
### The Gradio interface empowers users to upload seed MIDI files or generate from scratch, tune prime/generation token counts, control randomness (temperature, top-*p*), and optionally append drums or natural “outro” tokens. Generated outputs appear in ten parallel batches with synchronized audio previews and piano-roll plots. Users can iteratively add or remove entire batches to sculpt a final composition, which is rendered back into MIDI and audio via an integrated SoundFont pipeline. Our release demonstrates a seamless blend of state-of-the-art model performance, efficient MIDI tokenization, and user-centric design, fostering rapid exploration of algorithmic composition.

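#### To make the interface description above concrete, below is a rough Gradio sketch of the same controls (seed MIDI upload, prime/generation token counts, temperature, top-*p*, optional drums/outro). It is an illustration under assumed parameter ranges, not the actual demo code, and `generate_music` is a hypothetical placeholder.

```python
# Rough sketch of a Gradio UI with the controls described above.
# Illustrative only: ranges, defaults, and generate_music are placeholders.
import gradio as gr

def generate_music(seed_midi, prime_tokens, gen_tokens, temperature, top_p, add_drums, add_outro):
    # A real implementation would tokenize the seed MIDI, run the transformer,
    # and render each generated batch to MIDI/audio via a SoundFont pipeline.
    return None  # e.g. a path to a rendered audio preview

with gr.Blocks() as demo:
    seed = gr.File(label="Seed MIDI (optional)")
    prime = gr.Slider(0, 8192, value=600, step=8, label="Prime tokens")
    length = gr.Slider(30, 4096, value=600, step=30, label="Generation tokens")
    temperature = gr.Slider(0.1, 1.0, value=0.9, step=0.01, label="Temperature")
    top_p = gr.Slider(0.1, 1.0, value=0.96, step=0.01, label="Top-p")
    drums = gr.Checkbox(label="Add drums")
    outro = gr.Checkbox(label="Add outro")
    audio_out = gr.Audio(label="Generated audio preview")

    gr.Button("Generate").click(
        generate_music,
        inputs=[seed, prime, length, temperature, top_p, drums, outro],
        outputs=audio_out,
    )

demo.launch()
```
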
***

## Models

#### Presented are two models:

### **[Orpheus Music Transformer Model](https://huggingface.co/asigalov61/Orpheus-Music-Transformer/blob/main/Orpheus_Music_Transformer_Trained_Model_92832_steps_0.7028_loss_0.7979_acc.pth)**

#### This is the base model, capable of music generation/continuation and notes/drums inpainting

### **[Orpheus Bridge Music Transformer Model](https://huggingface.co/asigalov61/Orpheus-Music-Transformer/blob/main/Orpheus_Bridge_Music_Transformer_Trained_Model_19571_steps_0.9396_loss_0.7365_acc.pth)**

#### This is an auxiliary model, capable of seamless bridge inpainting/infilling in any music composition

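#### The sketch below shows one hedged way to fetch the base model checkpoint with `huggingface_hub` and load its raw weights. It only downloads and inspects the file; the transformer class needed to actually run the weights comes from the training code linked further down.

```python
# Hedged sketch: download the base checkpoint from this repo and load the raw
# weights. Running the model requires the architecture definition from the
# linked training code; this snippet only fetches and inspects the file.
import torch
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="asigalov61/Orpheus-Music-Transformer",
    filename="Orpheus_Music_Transformer_Trained_Model_92832_steps_0.7028_loss_0.7979_acc.pth",
)

# weights_only=False may be needed on newer PyTorch if the file is a pickled object
checkpoint = torch.load(ckpt_path, map_location="cpu", weights_only=False)
print(type(checkpoint))
```
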
***

## Live Hugging Face Spaces demos

### **[Orpheus Music Transformer](https://huggingface.co/collections/asigalov61/orpheus-music-transformer-685c3c8e59ed1414c02bb8cd)**

#### If you enjoyed any of the Orpheus Music Transformer demos, please star and duplicate. It helps a lot! 🤗

***

## Training dataset code

### The models were trained on select high-quality MIDIs from the [Godzilla MIDI Dataset](https://huggingface.co/datasets/projectlosangeles/Godzilla-MIDI-Dataset)

### Please check out the [Orpheus Training Dataset Maker](https://huggingface.co/asigalov61/Orpheus-Music-Transformer/blob/main/training_data/README.md) notebook for details

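#### As an illustration only (not part of the dataset-maker notebook), the dataset repository can be pulled locally with `huggingface_hub` before preprocessing:

```python
# Hedged sketch: download a local copy of the Godzilla MIDI Dataset repository.
# The dataset layout and the actual preprocessing steps are covered in the
# Orpheus Training Dataset Maker notebook, not here.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="projectlosangeles/Godzilla-MIDI-Dataset",
    repo_type="dataset",
)
print(local_dir)  # path to the cached dataset files
```
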
***

## Models training code

### Please check out the [Orpheus Music Transformer Maker](https://huggingface.co/asigalov61/Orpheus-Music-Transformer/blob/main/training_code/README.md) code/notebook for details

***

### Project Los Angeles

### Tegridy Code 2025