Update README.md

README.md CHANGED
@@ -1,100 +1,80 @@
---
language:
- en
- zh
---

# PKU-DyMVHumans

## Sources

Project page: https://pku-dymvhumans.github.io/

## Overview

PKU-DyMVHumans comprises 32 humans across 45 different dynamic scenarios, each featuring highly detailed appearances and complex human motions.

## Key Features

- **High-fidelity performance**: We construct a multi-view system to capture humans in motion, containing 56/60 synchronous cameras with 1080P or 4K resolution.
- **Highly detailed appearance**: It captures complex cloth deformation and intricate texture details, such as delicate satin ribbons and special headwear.
- **Complex human motion**: It covers a wide range of special costume performances, artistic movements, and sports activities.

## Data Structure

For each scene, we provide the multi-view images (`./case_name/per_view/cam_*/images/`), the coarse foreground with RGBA channels (`./case_name/per_view/cam_*/images/`), as well as the coarse foreground segmentation (`./case_name/per_view/cam_*/pha/`), which are obtained using [BackgroundMattingV2](https://github.com/PeterL1n/BackgroundMattingV2).

To make benchmark comparisons on our dataset easier, we save PKU-DyMVHumans in different data formats (i.e., [Surface-SOS](https://github.com/zhengxyun/Surface-SOS), [NeuS](https://github.com/Totoro97/NeuS), [NeuS2](https://github.com/19reborn/NeuS2), [Instant-ngp](https://github.com/NVlabs/instant-ngp), and [3D-Gaussian](https://github.com/graphdeco-inria/gaussian-splatting)) at **Part1** and provide a document that describes the data processing.

```
PKU-DyMVHumans
|--- case_name
| |--- per_view
| |--- per_frame
| |--- data_ngp
| |--- data_NeuS
| |--- data_NeuS2
| |--- data_COLMAP
| |--- <overview_fme_*.png>
|--- ...
```
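
As a rough illustration of how the per-view data above can be consumed, here is a minimal Python sketch that pairs each camera's RGB frames with its coarse matting masks. It only assumes the `./case_name/per_view/cam_*/images/` and `./case_name/per_view/cam_*/pha/` layout described above; the `.png` extension and matching frame names are assumptions.

```
from pathlib import Path

# Minimal sketch: iterate one scene's per-view data.
# The .png extension and identical frame naming in images/ and pha/
# are assumptions, not guaranteed by the dataset documentation.
case_dir = Path("./case_name/per_view")

for cam_dir in sorted(case_dir.glob("cam_*")):
    frames = sorted((cam_dir / "images").glob("*.png"))  # multi-view RGB frames
    mattes = sorted((cam_dir / "pha").glob("*.png"))     # coarse foreground segmentation
    print(f"{cam_dir.name}: {len(frames)} frames, {len(mattes)} mattes")
    for frame_path, matte_path in zip(frames, mattes):
        # e.g., composite the foreground here, or convert the scene into the
        # NeuS / Instant-ngp / COLMAP layouts stored under data_*.
        pass
```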

## Benchmark

The objective of our benchmark is to achieve robust geometry reconstruction and novel view synthesis for dynamic humans under markerless and fixed multi-view camera settings, while minimizing the need for manual annotation and reducing time costs.

This includes **neural scene decomposition**, **novel view synthesis**, and **dynamic human modeling**.

## Citation

If you find this repo helpful, please cite:

```
@inproceedings{zheng2024PKU-DyMVHumans,
  title={PKU-DyMVHumans: A Multi-View Video Benchmark for High-Fidelity Dynamic Human Modeling},
  author={Zheng, Xiaoyun and Liao, Liwei and Li, Xufeng and Jiao, Jianbo and Wang, Rongjie and Gao, Feng and Wang, Shiqi and Wang, Ronggang},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}
```

---
# Example metadata to be added to a dataset card.
# Full dataset card template at https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md
language:
- en
- zh
- fr
- ja
- es
license: cc-by-nc-sa-4.0
tags:
- Multimedia
- Panoramic
- Video
- Multi-viewpoint
viewer: false
---

# <i>360+x</i> Dataset

## Overview

The 360+x dataset introduces a unique panoptic perspective to scene understanding, differentiating itself from traditional datasets by offering multiple viewpoints and modalities captured from a variety of scenes.

### Key Features:

- **Multi-viewpoint Captures:** Includes 360° panoramic video, third-person front view video, egocentric monocular video, and egocentric binocular video.
- **Rich Audio Modalities:** Features normal audio and directional binaural delay.
- **2,152 multi-modal videos** captured by 360 cameras and Spectacles cameras (8,579k frames in total), recorded in 17 cities across 5 countries and covering 28 scenes ranging from Artistic Spaces to Natural Landscapes.
- **Action Temporal Segmentation:** Provides labels for 38 action instances for each video pair.

## Dataset Details

### Project Description

- **Developed by:** Hao Chen, Yuqi Hou, Chenyuan Qu, Irene Testini, Xiaohan Hong, Jianbo Jiao
- **Funded by:** the Ramsay Research Fund and the Royal Society Short Industry Fellowship
- **License:** Creative Commons Attribution-NonCommercial-ShareAlike 4.0

### Sources

- **Repository:** Coming Soon
- **Paper:** https://arxiv.org/abs/2404.00989

## Dataset Statistics

- **Total Videos:** 2,152, split between 464 videos captured using 360 cameras and 1,688 with Spectacles cameras.
- **Scenes:** 15 indoor and 13 outdoor, totaling 28 scene categories.
- **Short Clips:** The videos have been segmented into 1,380 shorter clips, each approximately 10 seconds long, totaling around 67.78 hours.
- **Frames:** 8,579k frames across all clips.

## Dataset Structure

Our dataset offers a comprehensive collection of panoramic videos, binocular videos, and third-person videos, each pair of videos accompanied by annotations. Additionally, it includes features extracted using I3D, VGGish, and ResNet-18. Given the high-resolution nature of our dataset (5760x2880 for panoramic and binocular videos, 1920x1080 for third-person front view videos), the overall size is considerably large. To accommodate diverse research needs and computational resources, we also provide a lower-resolution version of the dataset (640x320 for panoramic and binocular videos, 569x320 for third-person front view videos) available for download.

<b>In this repo, we provide the lower-resolution version of the dataset. To access the high-resolution version, please visit the <a href="https://x360dataset.github.io/">official website</a>.</b>
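
For orientation, the sketch below shows one way the lower-resolution clips and the pre-extracted features might be loaded after download. The local directory names, file names, and the `.npy` feature format used here are illustrative assumptions, not the repository's documented layout.

```
from pathlib import Path

import cv2          # pip install opencv-python
import numpy as np

# Hypothetical local paths -- adjust to wherever the downloaded
# low-resolution clips and extracted features actually live.
clip_path = Path("360x_lowres/panoramic/clip_0001.mp4")  # assumed name/layout
feat_path = Path("360x_features/i3d/clip_0001.npy")      # assumed .npy format

# Read an ~10 s clip frame by frame (640x320 for the low-res panoramic video).
cap = cv2.VideoCapture(str(clip_path))
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()
print(f"Loaded {len(frames)} frames" + (f" of shape {frames[0].shape}" if frames else ""))

# Load a pre-extracted feature (I3D / VGGish / ResNet-18), assuming NumPy arrays.
if feat_path.exists():
    feat = np.load(feat_path)
    print("Feature shape:", feat.shape)
```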

## BibTeX

```
@inproceedings{chen2024x360,
  title={360+x: A Panoptic Multi-modal Scene Understanding Dataset},
  author={Chen, Hao and Hou, Yuqi and Qu, Chenyuan and Testini, Irene and Hong, Xiaohan and Jiao, Jianbo},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}
```