loua19 committed · Commit 9911b60 · 1 Parent(s): 0c2eef3

README.md CHANGED
@@ -12,6 +12,7 @@ tags:
 # Model
 
 <!-- TODO: Add plan and video of the demo -->
+<!-- TODO: Add links to the embedding hf page. Put the annealed model in this page-->
 
 `Aria-medium-base` is a pretrained autoregressive generative model for symbolic music based loosely on the LLaMA 3.2 (1B) architecture. It was trained on ~60k hours of MIDI transcriptions of expressive solo-piano recordings. It has been finetuned to produce realistic continuations of solo-piano compositions ([huggingface](https://example.com/)) as well as to produce general-purpose contrastive MIDI embeddings ([huggingface](https://example.com/)).
 
@@ -28,7 +29,7 @@ The model is most naturally suited for generating continuations of existing MIDI
 Due to overrepresentation of performances of popular compositions (e.g., those from well-known classical composers such as Chopin) and difficulties in completely deduplicating the training data, some of these compositions have been compositionally memorized by the model. We suggest performing inference with lesser-known compositions or your own music for more original results.
 
 ### Input Quality Considerations
-Since the model has not been post-trained with any instruction or RLHF (similar to pre-instruct GPT models), it is very sensitive to input quality and performs best when prompted with well-played music. To get sample MIDI files, see the `example-prompts/` directory or explore the [Aria-MIDI](https://huggingface.co/datasets/loubb/aria-midi) dataset.
+Since the model has not been post-trained with any instruction tuning or RLHF (similar to pre-instruct GPT models), it is very sensitive to input quality and performs best when prompted with well-played music. To get sample MIDI files, see the `example-prompts/` directory or explore the [Aria-MIDI](https://huggingface.co/datasets/loubb/aria-midi) dataset.
 
 ## Quickstart
 
@@ -91,4 +92,4 @@ The Aria project has been kindly supported by EleutherAI, Stability AI, as well
   year={2025},
   url={https://arxiv.org/abs/2504.15071}
 }
-```
+```
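The continuation behavior described in the README diff above boils down to temperature-based autoregressive sampling: the model emits logits over a MIDI token vocabulary, one token is sampled, appended, and the loop repeats. A minimal generic sketch of that sampling step (all names here are illustrative — this is not the Aria inference API, and the logits below are stand-ins for real model output):

```python
import math
import random

def sample_next(logits, temperature=0.95, rng=random):
    """Sample one token id from raw logits using temperature scaling.

    Generic sketch: softmax over temperature-scaled logits, then
    inverse-CDF sampling. Lower temperature concentrates probability
    on the highest-logit token.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Toy autoregressive loop: extend a token "prompt" with fake logits
rng = random.Random(0)
tokens = [1, 2, 3]                               # hypothetical prompt token ids
for _ in range(5):
    fake_logits = [0.1 * t for t in range(8)]    # stand-in for a model forward pass
    tokens.append(sample_next(fake_logits, temperature=0.95, rng=rng))
```

In a real setup the fake logits would come from the model's forward pass over the tokenized MIDI prompt; the temperature knob is what trades off fidelity to the prompt against variety in the continuation.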
example-prompts/classical.mid ADDED (binary file, 10 kB)
example-prompts/nocturne.mid ADDED (binary file, 7.44 kB)
example-prompts/pokey_jazz.mid ADDED (binary file, 14.2 kB)
example-prompts/smooth_jazz.mid ADDED (binary file, 6.96 kB)
example-prompts/yesterday.mid ADDED (binary file, 6.54 kB)
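The `example-prompts/` files added in this commit are ordinary Standard MIDI Files. As a small self-contained illustration (not code from this repository), the 14-byte `MThd` header that opens every such file can be parsed with Python's standard library alone; the synthetic header bytes below are constructed in-code rather than read from the binary files above:

```python
import struct

def parse_midi_header(data: bytes) -> dict:
    """Parse the 14-byte MThd chunk at the start of a Standard MIDI File.

    Layout (big-endian): 4-byte chunk id "MThd", uint32 length (always 6),
    then three uint16 fields: format, number of tracks, time division.
    """
    chunk_id, length = struct.unpack(">4sI", data[:8])
    if chunk_id != b"MThd" or length != 6:
        raise ValueError("not a valid SMF header")
    fmt, ntrks, division = struct.unpack(">3H", data[8:14])
    return {"format": fmt, "tracks": ntrks, "division": division}

# Synthetic header: format 1, 2 tracks, 480 ticks per quarter note
header = b"MThd" + struct.pack(">I", 6) + struct.pack(">3H", 1, 2, 480)
print(parse_midi_header(header))  # {'format': 1, 'tracks': 2, 'division': 480}
```

Running this against any of the prompt files (e.g., the first 14 bytes of `example-prompts/classical.mid`) would report its format, track count, and tick resolution.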