---
license: apache-2.0
---
## MultiCamVideo Dataset
### 1. Dataset Introduction
**TL;DR:** The MultiCamVideo Dataset is a multi-camera synchronized video dataset rendered using Unreal Engine 5. It includes synchronized multi-camera videos and their corresponding camera trajectories. The MultiCamVideo Dataset can be valuable in fields such as camera-controlled video generation, synchronized video production, and 3D/4D reconstruction.
<div align="center">
<video controls autoplay style="width: 70%;" src="https://cdn-uploads.huggingface.co/production/uploads/6530bf50f145530101ec03a2/r-cc03Z6b5v_X5pkZbIZR.mp4"></video>
</div>
The MultiCamVideo Dataset is a multi-camera synchronized video dataset rendered using Unreal Engine 5. It includes synchronized multi-camera videos and their corresponding camera trajectories.
It consists of 13.6K different dynamic scenes, each captured by 10 cameras, resulting in a total of 136K videos. Each dynamic scene is composed of four elements: {3D environment, character, animation, camera}. Specifically, we use an animation to drive a character and position the animated character within a 3D environment. Time-synchronized cameras are then set up to move along predefined trajectories to render the multi-camera video data.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6530bf50f145530101ec03a2/Ea0Feqy7uBTLczyPal-CE.png" alt="Example Image" width="70%">
</p>
**3D Environment:** We collect 37 high-quality 3D environment assets from [Fab](https://www.fab.com). To minimize the domain gap between rendered data and real-world videos, we primarily select visually realistic 3D scenes, while choosing a few stylized or surreal 3D scenes as a supplement. To ensure data diversity, the selected scenes cover a variety of indoor and outdoor settings, such as city streets, shopping malls, cafes, office rooms, and countryside.
**Character:** We collect 66 different human 3D models as characters from [Fab](https://www.fab.com) and [Mixamo](https://www.mixamo.com).
**Animation:** We collect 93 different animations from [Fab](https://www.fab.com) and [Mixamo](https://www.mixamo.com), including common actions such as waving, dancing, and cheering. We use these animations to drive the collected characters and create diverse data through various combinations.
**Camera:** To ensure camera movements are diverse and closely resemble real-world distributions, we create a wide range of camera trajectories and parameters to cover various situations. We achieve this by designing rules to batch-generate random camera starting positions and movement trajectories:
1. Camera Starting Position.
We take the character's position as the center of a hemisphere with a radius chosen from {3m, 5m, 7m, 10m} according to the size of the 3D scene, and randomly sample the camera's starting point within it, ensuring that the distance to the character is greater than 0.5m and the pitch angle is within 45 degrees.
2. Camera Trajectories.
- **Pan & Tilt**:
The camera rotation angles are randomly selected, with pan angles ranging from 5 to 45 degrees and tilt angles ranging from 5 to 30 degrees; the direction is randomly chosen as left/right for pan and up/down for tilt.
- **Basic Translation**:
The camera translates along the positive and negative directions of the xyz axes, with movement distances randomly selected within the range of \\([\frac{1}{4}, 1] \times\\) distance2character.
- **Basic Arc Trajectory**:
The camera moves along an arc, with rotation angles randomly selected within the range of 15 to 75 degrees.
- **Random Trajectories**:
We sample 1-3 points in space, and the camera moves from its initial position through these points, with the total movement distance randomly selected within the range of \\([\frac{1}{4}, 1] \times\\) distance2character. The resulting polyline is smoothed to make the movement more natural.
- **Static Camera**:
The camera does not translate or rotate during shooting, maintaining a fixed position.
3. Camera Movement Speed.
To further enhance the diversity of trajectories, 50% of the training data uses constant-speed camera trajectories, while the other 50% uses variable-speed trajectories generated by nonlinear functions. Consider a camera trajectory with a total of \\(f\\) frames, starting at position \\(L_{start}\\) and ending at position \\(L_{end}\\). The location at the \\(i\\)-th frame is given by:
\\(L_i = L_{start} + (L_{end} - L_{start}) \cdot \left( \frac{1 - \exp(-a \cdot i/f)}{1 - \exp(-a)} \right),\\)
where \\(a\\) is an adjustable parameter that controls the trajectory speed. When \\(a > 0\\), the trajectory starts fast and then slows down; when \\(a < 0\\), it starts slow and then speeds up. The larger the absolute value of \\(a\\), the more drastic the change (see the sketch after this list).
4. Camera Parameters.
We choose four sets of camera parameters: {focal=18mm, aperture=10}, {focal=24mm, aperture=5}, {focal=35mm, aperture=2.4}, and {focal=50mm, aperture=2.4}.
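As a concrete illustration of rules 1 and 3 above, the following sketch reproduces the hemisphere starting-point sampling and the exponential-easing formula for \\(L_i\\). This is our own illustration, not the dataset's actual generation code, and all function names are hypothetical:
```python
# Illustrative sketch only: not the dataset's actual generation code.
import numpy as np

def sample_start_position(character_pos, radius, min_dist=0.5, max_pitch_deg=45.0):
    """Sample a camera start point inside a hemisphere centered on the character,
    keeping the distance above min_dist and the pitch angle within max_pitch_deg."""
    while True:
        direction = np.random.normal(size=3)
        direction[2] = abs(direction[2])          # upper hemisphere only
        direction /= np.linalg.norm(direction)
        pitch_deg = np.degrees(np.arcsin(direction[2]))
        dist = np.random.uniform(0.0, radius)
        if dist > min_dist and pitch_deg <= max_pitch_deg:
            return np.asarray(character_pos) + dist * direction

def camera_locations(l_start, l_end, f, a):
    """Per-frame locations L_i = L_start + (L_end - L_start) * (1 - e^{-a i/f}) / (1 - e^{-a}).
    a > 0: fast start then slow; a < 0: slow start then fast; a -> 0 recovers constant speed."""
    i = np.arange(1, f + 1)
    weight = (1.0 - np.exp(-a * i / f)) / (1.0 - np.exp(-a))
    return np.asarray(l_start) + (np.asarray(l_end) - np.asarray(l_start)) * weight[:, None]

start = sample_start_position(character_pos=[0.0, 0.0, 0.0], radius=5.0)
locs = camera_locations(start, start + np.array([2.0, 0.0, 1.0]), f=81, a=3.0)
print(locs.shape)  # (81, 3): one xyz location per frame
```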
### 2. Statistics and Configurations
Dataset Statistics:
| Number of Dynamic Scenes | Camera per Scene | Total Videos | Zip File Size |
|:------------------------:|:----------------:|:------------:|:-------------:|
| 13,600 | 10 | 136,000 | 312 GB |
Video Configurations:
| Resolution | Frame Number | FPS |
|:-----------:|:------------:|:------------------------:|
| 1280x1280 | 81 | 15 |
Note: You can use a center crop to adjust the video's aspect ratio to fit your video generation model, such as 16:9, 9:16, 4:3, or 3:4.
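A minimal center-crop sketch (our own illustration, operating on decoded frames as numpy arrays; any video reader can supply them):
```python
# Illustrative sketch: center-cropping a 1280x1280 frame to a target aspect ratio.
import numpy as np

def center_crop(frame: np.ndarray, aspect_w: int, aspect_h: int) -> np.ndarray:
    h, w = frame.shape[:2]
    # Largest crop with the requested aspect ratio that fits in the frame.
    crop_w = min(w, int(h * aspect_w / aspect_h))
    crop_h = min(h, int(w * aspect_h / aspect_w))
    top = (h - crop_h) // 2
    left = (w - crop_w) // 2
    return frame[top:top + crop_h, left:left + crop_w]

frame = np.zeros((1280, 1280, 3), dtype=np.uint8)
print(center_crop(frame, 16, 9).shape)  # (720, 1280, 3)
```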
Camera Configurations:
| Focal Length | Aperture | Sensor Height | Sensor Width |
|:-----------------------:|:------------------:|:-------------:|:------------:|
| 18mm, 24mm, 35mm, 50mm | 10.0, 5.0, 2.4 | 23.76mm | 23.76mm |
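For reference, the field of view implied by each focal length follows from the standard pinhole relation \\(\mathrm{FoV} = 2\arctan\big(\frac{w}{2f}\big)\\), where \\(w\\) is the sensor size and \\(f\\) the focal length; since the sensor is square, horizontal and vertical FoV coincide. A small sketch (ours, for illustration only):
```python
# Illustrative: field of view implied by each focal length and the 23.76mm sensor.
import math

SENSOR_MM = 23.76
for focal_mm in (18, 24, 35, 50):
    fov = 2 * math.degrees(math.atan(SENSOR_MM / (2 * focal_mm)))
    print(f"{focal_mm}mm -> {fov:.1f} deg")
# 18mm -> 66.8 deg, 24mm -> 52.7 deg, 35mm -> 37.5 deg, 50mm -> 26.7 deg
```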
### 3. File Structure
```
MultiCamVideo-Dataset
├── train
│   ├── f18_aperture10
│   │   ├── scene1                          # one dynamic scene
│   │   │   ├── videos
│   │   │   │   ├── cam01.mp4               # synchronized 81-frame videos at 1280x1280 resolution
│   │   │   │   ├── cam02.mp4
│   │   │   │   ├── ...
│   │   │   │   └── cam10.mp4
│   │   │   └── cameras
│   │   │       └── camera_extrinsics.json  # 81-frame camera extrinsics of the 10 cameras
│   │   ├── ...
│   │   └── scene3400
│   ├── f24_aperture5
│   │   ├── scene1
│   │   ├── ...
│   │   └── scene3400
│   ├── f35_aperture2.4
│   │   ├── scene1
│   │   ├── ...
│   │   └── scene3400
│   └── f50_aperture2.4
│       ├── scene1
│       ├── ...
│       └── scene3400
└── val
    └── 10basic_trajectories
        ├── videos
        │   ├── cam01.mp4                   # example videos corresponding to the validation cameras
        │   ├── cam02.mp4
        │   ├── ...
        │   └── cam10.mp4
        └── cameras
            └── camera_extrinsics.json      # 10 different trajectories for validation
```
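Given this layout, the training scenes can be enumerated with a few lines of Python. The sketch below is our own illustration; it treats `camera_extrinsics.json` as an opaque object, since its exact schema is documented by the file itself rather than here:
```python
# Illustrative sketch: walking the documented directory layout above.
import json
from pathlib import Path

root = Path("MultiCamVideo-Dataset/train")
for scene_dir in sorted(root.glob("f*_aperture*/scene*")):
    videos = sorted((scene_dir / "videos").glob("cam*.mp4"))  # cam01..cam10
    with open(scene_dir / "cameras" / "camera_extrinsics.json") as fh:
        extrinsics = json.load(fh)  # schema: see the JSON file itself
    print(scene_dir, len(videos), type(extrinsics))
```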
### 4. Useful Scripts
- Data Extraction
```bash
cat MultiCamVideo-Dataset.part* > MultiCamVideo-Dataset.tar.gz
tar -xzvf MultiCamVideo-Dataset.tar.gz
```
- Camera Visualization
```bash
python vis_cam.py
```
The visualization script is modified from [CameraCtrl](https://github.com/hehao13/CameraCtrl/blob/main/tools/visualize_trajectory.py); we thank the authors for their inspiring work.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6530bf50f145530101ec03a2/q5whL09UsZnrtD4xO9EbR.png" alt="Example Image" width="40%">
</p>
## Citation
If you find this dataset useful, please cite our [paper](https://arxiv.org/abs/2503.11647).
```bibtex
@misc{bai2025recammaster,
title={ReCamMaster: Camera-Controlled Generative Rendering from A Single Video},
author={Jianhong Bai and Menghan Xia and Xiao Fu and Xintao Wang and Lianrui Mu and Jinwen Cao and Zuozhu Liu and Haoji Hu and Xiang Bai and Pengfei Wan and Di Zhang},
year={2025},
eprint={2503.11647},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2503.11647},
}
```
## Contact
[email protected]
## Acknowledgments
We thank Jinwen Cao, Yisong Guo, Haowen Ji, Jichao Wang, and Yi Wang from Kuaishou Technology for their invaluable help in constructing the MultiCamVideo Dataset.