---
license: cc-by-4.0
---

# ByteCameraDepth Dataset

ByteCameraDepth is a multi-camera depth estimation dataset containing synchronized depth, color, and auxiliary data captured from a range of 3D cameras. The dataset provides comprehensive depth sensing from multiple cameras in various indoor scenarios, making it well suited for developing and evaluating depth estimation algorithms.

## Dataset Overview

- **Purpose**: Multi-camera depth estimation research and benchmarking
- **Total Sessions**: 39 recording sessions
- **Uncompressed Size**: ~2.7 TB
- **Data Collection System**: [Multi-Camera Depth Recording System](https://github.com/Ericonaldo/depth_recording)
- **License**: CC-BY-4.0

## Quick Start

### Data Extraction

The dataset is provided as split archive files. To extract the complete dataset:

```bash
cat recorded_data.tar.part.* | tar -xvf -
```

This will create a `recorded_data` folder containing all 39 recording sessions.

## Dataset Structure

### Archive Organization

```
recorded_data_packed/
├── recorded_data.tar.part.000
├── recorded_data.tar.part.001
├── ...
└── recorded_data.tar.part.136
```

### Extracted Data Structure

After extraction, the data is organized as follows:

```
recorded_data/
└── YYYYMMDD_HHMM/                   # Timestamp-based session folder (39 sessions total)
    ├── camera_realsense_455/        # Intel RealSense D455
    │   ├── depth_000.png            # 16-bit depth images
    │   ├── color_000.png            # 8-bit color images
    │   └── ...
    ├── camera_realsense_d405/       # Intel RealSense D405
    │   ├── depth_000.png
    │   ├── color_000.png
    │   └── ...
    ├── camera_realsense_d415/       # Intel RealSense D415
    │   ├── depth_000.png
    │   ├── color_000.png
    │   └── ...
    ├── camera_realsense_d435/       # Intel RealSense D435
    │   ├── depth_000.png
    │   ├── color_000.png
    │   └── ...
    ├── camera_realsense_l515/       # Intel RealSense L515
    │   ├── depth_000.png
    │   ├── color_000.png
    │   └── ...
    ├── camera_kinect/               # Microsoft Azure Kinect
    │   ├── depth_000.png            # 16-bit depth images
    │   ├── color_000.png            # 8-bit color images
    │   ├── ir_000.png               # Infrared images
    │   └── ...
    ├── camera_zed2i_neural/         # Stereolabs ZED2i (Neural mode)
    │   ├── raw_depth_000.npy        # 32-bit float depth arrays
    │   ├── depth_000.png            # 16-bit depth images
    │   ├── color_000.png            # Color images
    │   ├── pcd_000.npy              # Point cloud data (X,Y,Z)
    │   ├── normal_000.npy           # Surface normal vectors
    │   └── ...
    ├── camera_zed2i_performance/
    ├── camera_zed2i_quality/
    ├── camera_zed2i_ultra/
    └── ...
```
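
As a quick sanity check, the following is a minimal Python sketch (assuming the extracted `recorded_data` layout above) that enumerates sessions, camera folders, and per-camera depth frame counts:

```python
from pathlib import Path

root = Path("recorded_data")  # folder produced by the extraction step above

# Walk every session folder and report how many depth frames each camera recorded.
for session in sorted(p for p in root.iterdir() if p.is_dir()):
    for cam in sorted(p for p in session.iterdir() if p.is_dir()):
        n_depth = len(list(cam.glob("depth_*.png")))
        print(f"{session.name}/{cam.name}: {n_depth} depth frames")
```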

## Camera Systems and Specifications

The dataset includes data collected by our [depth recording toolkit](https://github.com/Ericonaldo/depth_recording):

### Intel RealSense Cameras

- **Models**: D405, D415, D435, D455, L515
- **Output**: `depth_xxx.png` (16-bit), `color_xxx.png` (8-bit)

### Microsoft Azure Kinect

- **Depth Mode**: Wide FOV, unbinned
- **Output**: `depth_xxx.png` (16-bit), `color_xxx.png` (8-bit), `ir_xxx.png` (infrared)

### Stereolabs ZED2i

- **Depth Resolution**: 1280×720
- **Depth Modes**: 4 different modes (neural, performance, quality, ultra)
- **Output** (see the loading sketch after this list):
  - `raw_depth_xxx.npy` (32-bit float depth arrays)
  - `depth_xxx.png` (16-bit depth images)
  - `color_xxx.png` (8-bit color images)
  - `pcd_xxx.npy` (point cloud data)
  - `normal_xxx.npy` (surface normal vectors)
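
A minimal sketch for reading one ZED2i frame with NumPy (the session timestamp below is a hypothetical placeholder; array shapes are only what the 32-bit float format and 1280×720 resolution suggest):

```python
import numpy as np

cam = "recorded_data/20250101_1200/camera_zed2i_neural"  # hypothetical session path

raw_depth = np.load(f"{cam}/raw_depth_000.npy")  # 32-bit float depth map
pcd = np.load(f"{cam}/pcd_000.npy")              # per-pixel X, Y, Z coordinates
normals = np.load(f"{cam}/normal_000.npy")       # per-pixel surface normal vectors

print(raw_depth.shape, pcd.shape, normals.shape)
```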

## Data Formats

### File Types and Specifications

| Data Type | Format | Bit Depth | Description |
|-----------|--------|-----------|-------------|
| Depth Images | PNG | 16-bit | Standard depth maps |
| Color Images | PNG | 8-bit RGB | Color/texture images |
| Raw Depth | NPY | 32-bit float | High-precision depth (ZED2i only) |
| Point Clouds | NPY | 32-bit float | 3D point coordinates (X, Y, Z) |
| Surface Normals | NPY | 32-bit float | Surface normal vectors |
| Infrared | PNG | 8-bit | IR images (Kinect only) |

### Depth Data

Depth values are stored in millimeters for most cameras, so dividing the raw depth by 1000 gives depth in meters. The RealSense D405 and L515 use different scale factors, 2500 and 10000 respectively, so their raw depth must be divided by 2500 (D405) or 10000 (L515) to obtain depth in meters.
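
A minimal Python sketch of that conversion (OpenCV is used here just as one way to read 16-bit PNGs; the camera folder names follow the directory layout above):

```python
import cv2
import numpy as np

# Divisor that converts stored 16-bit depth values to meters.
DEPTH_DIVISOR = {
    "camera_realsense_d405": 2500.0,
    "camera_realsense_l515": 10000.0,
}
DEFAULT_DIVISOR = 1000.0  # millimeters -> meters for all other cameras

def load_depth_in_meters(png_path: str, camera_folder: str) -> np.ndarray:
    """Read a 16-bit depth PNG and return a float32 depth map in meters."""
    raw = cv2.imread(png_path, cv2.IMREAD_UNCHANGED)  # preserves the 16-bit values
    divisor = DEPTH_DIVISOR.get(camera_folder, DEFAULT_DIVISOR)
    return raw.astype(np.float32) / divisor

# Example: depth = load_depth_in_meters(".../camera_kinect/depth_000.png", "camera_kinect")
```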

### File Naming Convention

- Sequential numbering: `xxx` represents the frame index (000, 001, 002, ...)
- Synchronized capture: the same frame index across cameras represents a simultaneous capture (see the pairing sketch after this list)
- Camera identification: folder names clearly identify camera type and model
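
The snippet below is one way to gather the synchronized frames for a given index across all cameras in a session (the session timestamp is a hypothetical placeholder):

```python
from pathlib import Path

def synchronized_frames(session: Path, index: int, kind: str = "color") -> dict:
    """Map camera folder name -> path of that camera's frame with the given index."""
    frame = f"{kind}_{index:03d}.png"
    return {cam.name: cam / frame
            for cam in sorted(session.iterdir())
            if cam.is_dir() and (cam / frame).exists()}

# Example with a hypothetical session folder:
print(synchronized_frames(Path("recorded_data/20250101_1200"), 0))
```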

## Citation

If you use this dataset in your research, please cite:

```bibtex
@article{liu2025manipulation,
  title={Manipulation as in Simulation: Enabling Accurate Geometry Perception in Robots},
  author={Liu, Minghuan and Zhu, Zhengbang and Han, Xiaoshen and Hu, Peng and Lin, Haotong and
          Li, Xinyao and Chen, Jingxiao and Xu, Jiafeng and Yang, Yichu and Lin, Yunfeng and
          Li, Xinghang and Yu, Yong and Zhang, Weinan and Kong, Tao and Kang, Bingyi},
  journal={arXiv preprint},
  year={2025}
}
```

## License

This dataset is released under the CC BY 4.0 License.