# Skip-BART
This model card description was generated by Grok 3.
## Model Details
- **Model Name**: Skip-BART
- **Model Type**: Transformer-based model (BART architecture) for automatic stage lighting control
- **Version**: 1.0
- **Release Date**: August 2025
- **Developers**: Zijian Zhao, Dian Jin
- **Organization**: HKUST, PolyU
- **License**: Apache License 2.0
- **Paper**: [Automatic Stage Lighting Control: Is it a Rule-Driven Process or Generative Task?](https://arxiv.org/abs/2506.01482)
- **Citation:**
```bibtex
@article{zhao2025automatic,
  title={Automatic Stage Lighting Control: Is it a Rule-Driven Process or Generative Task?},
  author={Zhao, Zijian and Jin, Dian and Zhou, Zijing and Zhang, Xiaoyu},
  journal={arXiv preprint arXiv:2506.01482},
  year={2025}
}
```
- **Contact**: [email protected]
- **Repository**: https://github.com/RS2002/Skip-BART
## Model Description
Skip-BART is built on the Bidirectional and Auto-Regressive Transformers (BART) architecture and is designed for automatic stage lighting control. It treats stage lighting as a generative task rather than a rule-driven process: given music data in an octuple format, it generates a lighting sequence synchronized with the music, using a skip-connection-enhanced BART structure to improve performance.
- **Architecture**: BART with skip connections (see the illustrative sketch after this list)
- **Input Format**: encoder input `(batch_size, length, 512)`, decoder input `(batch_size, length, 2)`, encoder and decoder attention masks `(batch_size, length)`
- **Output Format**: hidden states `(batch_size, length, 1024)`
- **Hidden Size**: 1024
- **Training Objective**: Pre-training on music data, followed by fine-tuning for lighting sequence generation
- **Tasks Supported**: Stage lighting sequence generation
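The following is a minimal, hypothetical PyTorch sketch of one common way a skip connection between two hidden-state streams can be realized (residual addition followed by normalization). It is not the authors' implementation; the module and argument names are invented for illustration, and the exact wiring used in Skip-BART is described in the paper.
```python
import torch
import torch.nn as nn

class SkipFusion(nn.Module):
    """Hypothetical sketch: residual-style fusion of a skipped hidden state
    into another layer's output, followed by layer normalization."""

    def __init__(self, hidden_size: int = 1024):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, hidden: torch.Tensor, skipped: torch.Tensor) -> torch.Tensor:
        # Both tensors have shape (batch_size, length, hidden_size).
        return self.norm(hidden + skipped)

# Toy shapes matching this card: batch=2, length=1024, hidden=1024.
fusion = SkipFusion(1024)
out = fusion(torch.rand(2, 1024, 1024), torch.rand(2, 1024, 1024))
print(out.shape)  # torch.Size([2, 1024, 1024])
```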
## Training Data
The model was trained on the **RPMC-L2** dataset:
- **Dataset Source**: [RPMC-L2](https://zenodo.org/records/14854217?token=eyJhbGciOiJIUzUxMiJ9.eyJpZCI6IjM5MDcwY2E5LTY0MzUtNGZhZC04NzA4LTczMjNhNTZiOGZmYSIsImRhdGEiOnt9LCJyYW5kb20iOiI1YWRkZmNiMmYyOGNiYzI4ZWUxY2QwNTAyY2YxNTY4ZiJ9.0Jr6GYfyyn02F96eVpkjOtcE-MM1wt-_ctOshdNGMUyUKI15-9Rfp9VF30_hYOTqv_9lLj-7Wj0qGyR3p9cA5w)
- **Description**: Contains music and corresponding stage lighting data in a format suitable for training Skip-BART.
- **Details**: Refer to the [paper](https://arxiv.org/abs/2506.01482) for dataset specifics.
## Usage
### Installation
```shell
git clone https://huggingface.co/RS2002/Skip-BART
```
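This card does not list explicit dependencies. As an assumption (not an official requirements list), a standard PyTorch environment plus the Hugging Face `transformers` library should be enough to run the example below:
```shell
# Assumed dependencies; not an official requirements list.
pip install torch transformers
cd Skip-BART
```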
### Example Code
```python
import torch

# Assumes this script runs inside the cloned repository, so that model.py is importable.
from model import Skip_BART

# Load the pre-trained model from the Hugging Face Hub
model = Skip_BART.from_pretrained("RS2002/Skip-BART")

# Dummy example inputs:
# - encoder input of shape (batch_size=2, length=1024, feature_dim=512)
# - decoder input of integer tokens, shape (2, 1024, 2)
# - encoder/decoder attention masks of shape (2, 1024)
x_encoder = torch.rand((2, 1024, 512))
x_decoder = torch.randint(0, 10, (2, 1024, 2))
encoder_attention_mask = torch.zeros((2, 1024))
decoder_attention_mask = torch.zeros((2, 1024))

# Forward pass returns hidden states of shape (batch_size, length, 1024)
output = model(x_encoder, x_decoder, encoder_attention_mask, decoder_attention_mask)
print(output.size())  # torch.Size([2, 1024, 1024])
```