nielsr (HF Staff) committed · verified · Commit 1a3bb0b · 1 Parent(s): bc90d11

Add task category, link to paper


This PR adds the `video-to-audio` task category to the dataset card, linking the dataset to the appropriate category.

It also ensures the dataset is linked to (and can be found at) https://huggingface.co/papers/2406.04321.

Files changed (1)
  1. README.md +44 -42
README.md CHANGED
@@ -1,42 +1,44 @@
- ---
- license: cc-by-4.0
- ---
-
- # V2M Dataset: A Large-Scale Video-to-Music Dataset 🎢
-
- **The V2M dataset is proposed in the [VidMuse project](https://vidmuse.github.io/), aimed at advancing research in video-to-music generation.**
-
- ## ✨ Dataset Overview
-
- The V2M dataset comprises 360K pairs of videos and music, covering various types including movie trailers, advertisements, and documentaries. This dataset provides researchers with a rich resource to explore the relationship between video content and music generation.
-
-
- ## 🛠️ Usage Instructions
-
- - Download the dataset:
-
- ```bash
- git clone https://huggingface.co/datasets/Zeyue7/V2M
- ```
-
- - Dataset structure:
-
- ```
- V2M/
- ├── V2M.txt
- ├── V2M-20k.txt
- └── V2M-bench.txt
- ```
-
- ## 🎯 Citation
-
- If you use the V2M dataset in your research, please consider citing:
-
- ```
- @article{tian2024vidmuse,
- title={Vidmuse: A simple video-to-music generation framework with long-short-term modeling},
- author={Tian, Zeyue and Liu, Zhaoyang and Yuan, Ruibin and Pan, Jiahao and Liu, Qifeng and Tan, Xu and Chen, Qifeng and Xue, Wei and Guo, Yike},
- journal={arXiv preprint arXiv:2406.04321},
- year={2024}
- }
- ```
 
 
 
+ ---
+ license: cc-by-4.0
+ task_categories:
+ - video-to-audio
+ ---
+
+ # V2M Dataset: A Large-Scale Video-to-Music Dataset 🎢
+
+ **The V2M dataset is proposed in the [VidMuse project](https://vidmuse.github.io/), aimed at advancing research in video-to-music generation. See the paper [VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling](https://huggingface.co/papers/2406.04321) for more details.**
+
+ ## ✨ Dataset Overview
+
+ The V2M dataset comprises 360K pairs of videos and music, covering various types including movie trailers, advertisements, and documentaries. This dataset provides researchers with a rich resource to explore the relationship between video content and music generation.
+
+
+ ## 🛠️ Usage Instructions
+
+ - Download the dataset:
+
+ ```bash
+ git clone https://huggingface.co/datasets/Zeyue7/V2M
+ ```
+
+ - Dataset structure:
+
+ ```
+ V2M/
+ ├── V2M.txt
+ ├── V2M-20k.txt
+ └── V2M-bench.txt
+ ```
+
+ ## 🎯 Citation
+
+ If you use the V2M dataset in your research, please consider citing:
+
+ ```
+ @article{tian2024vidmuse,
+ title={Vidmuse: A simple video-to-music generation framework with long-short-term modeling},
+ author={Tian, Zeyue and Liu, Zhaoyang and Yuan, Ruibin and Pan, Jiahao and Liu, Qifeng and Tan, Xu and Chen, Qifeng and Xue, Wei and Guo, Yike},
+ journal={arXiv preprint arXiv:2406.04321},
+ year={2024}
+ }
+ ```
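
After cloning, the three split files in the structure above can be read directly. A minimal sketch, assuming (the card does not say) that each `.txt` split file lists one video/music pair identifier per line:

```python
from pathlib import Path

def load_split(path):
    """Return the non-empty, stripped lines of a V2M split file.

    Assumption (not documented in the card): each .txt split file
    holds one video/music pair identifier per line.
    """
    return [line.strip()
            for line in Path(path).read_text(encoding="utf-8").splitlines()
            if line.strip()]
```

`load_split("V2M/V2M-bench.txt")` would then return the benchmark entries as a Python list, under that one-identifier-per-line assumption.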