Update README.md #1
by zhubofei - opened

README.md CHANGED
@@ -19,7 +19,7 @@ tags:
 # Model Card for Model ID
 ## Model Details
 ### Model Description
-The model consists of a music encoder ```MERT-v1-300M```, a natural language decoder ```vicuna-7b-delta-v0```, and a linear projection
+The model consists of a music encoder ```MERT-v1-300M```, a natural language decoder ```vicuna-7b-delta-v0```, and a linear projection layer between the two.
 
 This checkpoint of MusiLingo is developed on the MusicQA and can answer instructions with music raw audio, such as querying about the tempo, emotion, genre, tags or subjective feelings etc.
 You can use the MusicQA dataset for the following demo. For the implementation of MusicQA, please refer to our [Github repo](https://github.com/zihaod/MusiLingo/blob/main/musilingo/datasets/datasets/musicqa_dataset.py).
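
As context for the description above, here is a minimal sketch of how a checkpoint like this might be pulled from the Hub with `transformers`. The repository ID is a placeholder (the real one is not stated in this diff), and `trust_remote_code=True` assumes the checkpoint ships its own modeling code; the official inference pipeline lives in the MusiLingo GitHub repo linked above.

```python
# Minimal sketch, not the official MusiLingo pipeline.
# Assumptions: REPO_ID is a hypothetical placeholder, and the checkpoint
# exposes custom modeling code on the Hub (hence trust_remote_code=True).
from transformers import AutoModel

REPO_ID = "your-org/MusiLingo-musicqa-checkpoint"  # placeholder repo ID

# trust_remote_code=True lets transformers import the model class shipped
# with the checkpoint instead of a built-in architecture.
model = AutoModel.from_pretrained(REPO_ID, trust_remote_code=True)
print(model.__class__.__name__)
```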