# PKU-DyMVHumans

PKU-DyMVHumans is a versatile human-centric dataset designed for high-fidelity reconstruction and rendering of dynamic human scenarios.
It comprises 32 humans across 45 different dynamic scenarios, each featuring highly detailed appearances and complex human motions.

### Sources

- **Project page:** https://pku-dymvhumans.github.io
- **Github:** https://github.com/zhengxyun/PKU-DyMVHumans
- **Paper:** https://arxiv.org/abs/2403.16080

### Key Features:

- **Complex human motion:** It covers a wide range of special costume performances, artistic movements, and sports activities.
- **Human-object/scene interactions:** It includes human-object interactions, multi-person interactions, and complex scene effects (such as smoke).

### Benchmark

The objective of our benchmark is to achieve robust geometry reconstruction and novel view synthesis for dynamic humans under markerless, fixed multi-view camera settings, while minimizing the need for manual annotation and reducing time costs.

This includes **neural scene decomposition**, **novel view synthesis**, and **dynamic human modeling**.

## Dataset Details

### Agreement

<b>Note that by downloading the dataset, you acknowledge that you have read this agreement, understand it, and agree to be bound by it:</b>

- The PKU-DyMVHumans dataset is made available only for non-commercial research purposes. Any other use, in particular any use for commercial purposes, is prohibited.
- You agree not to further copy, publish, or distribute any portion of the dataset.
- Peking University reserves the right to terminate your access to the dataset at any time.

### Dataset Statistics

- **Scenes:** 45 different dynamic scenarios, covering a variety of actions and clothing styles.
- **Actions:** 4 action types: dance, kungfu, sport, and fashion show.
- **Individuals:** 32 professional performers, including 16 males, 11 females, and 5 children.
- **Frames:** approximately 8.2 million frames in total.

59 |
|
|
|
|
|
|
|
|
|
|
|
60 |
|
61 |
## Dataset Structure

For each scene, we provide the multi-view images (`./case_name/per_view/cam_*/images/`), the coarse foreground with RGBA channels (`./case_name/per_view/cam_*/images/`), as well as the coarse foreground segmentation (`./case_name/per_view/cam_*/pha/`), obtained using [BackgroundMattingV2](https://github.com/PeterL1n/BackgroundMattingV2).

To make benchmarks easier to compare on our dataset, we provide PKU-DyMVHumans in several data formats (i.e., [Surface-SOS](https://github.com/zhengxyun/Surface-SOS), [NeuS](https://github.com/Totoro97/NeuS), [NeuS2](https://github.com/19reborn/NeuS2), [Instant-ngp](https://github.com/NVlabs/instant-ngp), and [3D-Gaussian](https://github.com/graphdeco-inria/gaussian-splatting)) in **Part1**, together with a document describing the data processing.
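The coarse mattes in `pha/` can be used to pull the subject off the background for compositing or segmentation-based training. A minimal sketch in plain NumPy (the function name, array shapes, and the solid green background are illustrative assumptions, not part of the dataset tooling):

```python
import numpy as np

def composite_foreground(rgb, alpha, bg_color=(0, 255, 0)):
    """Blend a frame against a solid background using its alpha matte.

    rgb:   (H, W, 3) uint8 frame, e.g. from images/
    alpha: (H, W) matte, e.g. from pha/, scaled to [0, 1]
    """
    bg = np.broadcast_to(np.array(bg_color, dtype=np.uint8), rgb.shape)
    a = alpha[..., None]  # (H, W, 1) so it broadcasts over the 3 channels
    return (rgb * a + bg * (1.0 - a)).astype(np.uint8)
```

Each frame in `images/` would be paired with the same-named matte in `pha/`; loading the files themselves (e.g. with OpenCV or Pillow) is left out of the sketch.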

```
.
|--- <case_name>
|    |--- cams
|    |--- videos
|    |--- per_view
|    |--- per_frame
|    |--- data_ngp
|    |--- data_NeuS
|    |--- data_NeuS2
|    |--- data_COLMAP
|    |--- <overview_fme_*.png>
|--- ...
```

## BibTeX

```
@inproceedings{zheng2024DyMVHumans,
  title={PKU-DyMVHumans: A Multi-View Video Benchmark for High-Fidelity Dynamic Human Modeling},
  author={Zheng, Xiaoyun and Liao, Liwei and Li, Xufeng and Jiao, Jianbo and Wang, Rongjie and Gao, Feng and Wang, Shiqi and Wang, Ronggang},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}
```