|
--- |
|
language: |
|
- en |
|
- zh |
|
license: c-uda |
|
tags: |
|
- Video |
|
- Multi-viewpoint |
|
viewer: false |
|
--- |
|
|
|
# <i>PKU-DyMVHumans</i> Dataset |
|
|
|
## Overview |
|
|
|
PKU-DyMVHumans is a versatile human-centric dataset designed for high-fidelity reconstruction and rendering of dynamic human performances in markerless multi-view capture settings. |
|
|
|
It comprises 32 humans across 45 different dynamic scenarios, each featuring highly detailed appearances and complex human motions. |
|
|
|
### Sources |
|
|
|
- **Project page:** https://pku-dymvhumans.github.io |
|
- **GitHub:** https://github.com/zhengxyun/PKU-DyMVHumans
|
- **Paper:** https://arxiv.org/abs/2403.16080 |
|
|
|
### Key Features
|
|
|
- **High-fidelity performance:** We construct a multi-view capture system of 56 or 60 synchronized cameras recording at 1080p or 4K resolution to capture humans in motion.
|
- **High-detailed appearance:** The dataset captures complex cloth deformation and intricate texture details, such as delicate satin ribbons and special headwear.
|
- **Complex human motion:** It covers a wide range of special costume performances, artistic movements, and sports activities. |
|
- **Human-object/scene interactions:** These include human-object interactions, as well as challenging multi-person interactions and complex scene effects (e.g., lighting, shadows, and smoke).
|
|
|
### Benchmark |
|
|
|
The objective of our benchmark is to achieve robust geometry reconstruction and novel view synthesis for dynamic humans under markerless and fixed multi-view camera settings, while minimizing the need for manual annotation and reducing time costs. |
|
|
|
The benchmark tasks include **neural scene decomposition**, **novel view synthesis**, and **dynamic human modeling**.
|
|
|
|
|
|
|
## Dataset Details |
|
|
|
### Agreement |
|
|
|
Note that by downloading the dataset, you acknowledge that you have read this agreement, understand it, and agree to be bound by it:
|
|
|
- The PKU-DyMVHumans dataset is made available only for non-commercial research purposes. Any other use, in particular any use for commercial purposes, is prohibited. |
|
- You agree not to further copy, publish or distribute any portion of the dataset. |
|
- Peking University reserves the right to terminate your access to the dataset at any time. |
|
|
|
|
|
### Dataset Statistics |
|
|
|
- **Scenes:** 45 different dynamic scenarios covering a variety of actions and clothing styles.
|
- **Actions:** 4 different action types: dance, kungfu, sport, and fashion show. |
|
- **Individuals:** 32 professional performers, including 16 males, 11 females, and 5 children.
|
- **Frames:** approximately 8.2 million frames in total.
|
|
|
|
|
## Dataset Structure |
|
|
|
For each scene, we provide the multi-view images (`./case_name/per_view/cam_*/images/`), the coarse foreground frames with RGBA channels (`./case_name/per_view/cam_*/images/`), as well as the coarse foreground segmentation masks (`./case_name/per_view/cam_*/pha/`), both obtained using [BackgroundMattingV2](https://github.com/PeterL1n/BackgroundMattingV2).
|
|
|
To make it easier to benchmark methods against our dataset, we also release PKU-DyMVHumans in several data formats (i.e., [Surface-SOS](https://github.com/zhengxyun/Surface-SOS), [NeuS](https://github.com/Totoro97/NeuS), [NeuS2](https://github.com/19reborn/NeuS2), [Instant-ngp](https://github.com/NVlabs/instant-ngp), and [3D-Gaussian](https://github.com/graphdeco-inria/gaussian-splatting)) under **Part1**, together with a document that describes the data processing.
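For example, the `data_ngp` export can be read in the standard Instant-NGP way. The snippet below is a minimal sketch, not an official loader; the file name `transforms.json` and its keys are assumptions based on the usual Instant-NGP convention rather than a documented API of this dataset.

```python
# Minimal sketch (not an official loader): read per-frame camera poses from a
# data_ngp export, assuming it follows the standard Instant-NGP transforms.json
# convention ("frames" entries with "file_path" and "transform_matrix").
import json
from pathlib import Path

import numpy as np

def load_ngp_cameras(case_dir):
    """Return (image_path, 4x4 camera-to-world matrix) pairs for one scene."""
    with open(Path(case_dir) / "data_ngp" / "transforms.json") as f:
        meta = json.load(f)
    return [
        (frame["file_path"], np.array(frame["transform_matrix"], dtype=np.float32))
        for frame in meta["frames"]
    ]
```

The per-scene directory layout of PKU-DyMVHumans is: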
|
|
|
|
|
``` |
|
. |
|
|--- <case_name> |
|
| |--- cams |
|
| |--- videos |
|
| |--- per_view |
|
| |--- per_frame |
|
| |--- data_ngp |
|
| |--- data_NeuS |
|
| |--- data_NeuS2 |
|
| |--- data_COLMAP |
|
| |--- <overview_fme_*.png> |
|
|--- ... |
|
|
|
``` |
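As a concrete illustration of the layout above, here is a minimal sketch (not part of the official toolkit) that walks one camera folder under `per_view`, pairs each frame in `images/` with its matte in `pha/`, and composites a foreground RGBA image. The camera name `cam_00` and the assumption that frames and mattes share file names are illustrative only; adjust them to the naming in your download.

```python
# Minimal sketch (not an official loader): pair per-view frames with their
# BackgroundMattingV2 alpha mattes and composite RGBA foreground images.
from pathlib import Path

import numpy as np
from PIL import Image

def foreground_frames(case_dir, cam="cam_00"):
    """Yield (frame_name, HxWx4 uint8 RGBA array) for one camera view."""
    view_dir = Path(case_dir) / "per_view" / cam
    for img_path in sorted((view_dir / "images").glob("*.*")):
        pha_path = view_dir / "pha" / img_path.name  # assumed matching file name
        if not pha_path.is_file():
            continue
        rgb = np.asarray(Image.open(img_path).convert("RGB"))
        alpha = np.asarray(Image.open(pha_path).convert("L"))
        yield img_path.name, np.dstack([rgb, alpha])

# Example usage (hypothetical scene/camera names):
# for name, rgba in foreground_frames("./<case_name>", cam="cam_00"):
#     Image.fromarray(rgba).save(f"fg_{Path(name).stem}.png")
```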
|
|
|
|
|
|
|
## BibTeX |
|
``` |
|
@article{zheng2024DyMVHumans, |
|
title={PKU-DyMVHumans: A Multi-View Video Benchmark for High-Fidelity Dynamic Human Modeling}, |
|
author={Zheng, Xiaoyun and Liao, Liwei and Li, Xufeng and Jiao, Jianbo and Wang, Rongjie and Gao, Feng and Wang, Shiqi and Wang, Ronggang}, |
|
journal={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, |
|
year={2024} |
|
} |
|
``` |