NYUv2

This is an unofficial, preprocessed version of the NYU Depth Dataset V2, made available for easier integration with modern ML workflows. The dataset was converted from the original .mat format into a split structure with embedded RGB images, depth maps, semantic masks, and instance masks in a Hugging Face-compatible format.

📸 Sample Visualization

(Figure: one sample shown as an RGB image, a depth map rendered with a Jet colormap, and a semantic mask.)

NYUv2 is a benchmark RGB-D dataset widely used for scene understanding tasks such as:

  • Indoor semantic segmentation
  • Depth estimation
  • Instance segmentation

This version has been preprocessed so that every sample contains the following aligned modalities:

  • Undistorted RGB images (.png)
  • Depth maps in millimeters (.tiff, uint16)
  • Semantic masks (.tiff, scaled uint16)
  • Instance masks (.tiff, scaled uint16)

Each sample carries a consistent id, and the dataset is divided into train/val/test splits.
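
If you want to confirm the split names and sizes before training, loading without a split argument returns every split at once. A minimal sketch; it prints whatever splits the repository actually defines rather than assuming their names:

from datasets import load_dataset

# Loading with no split returns a DatasetDict keyed by split name
splits = load_dataset("jagennath-hari/nyuv2")
print(splits)                    # each split with its columns and row count
print(splits["train"][0]["id"])  # every sample carries a string id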

🧾 Dataset Metadata

Additional files included (a loading sketch follows the list):

  • camera_params.json – camera intrinsics and distortion
  • class_names.json – mapping from class IDs to human-readable names
  • scaling_factors.json – used for metric depth and label/mask de-scaling during training
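
These sidecar files live in the dataset repository rather than in the data splits, so they can be fetched with huggingface_hub. A minimal sketch; the internal layout of camera_params.json is not documented above, so it is loaded as a plain dict:

import json
import os
from huggingface_hub import snapshot_download

# Pull only the JSON sidecar files from the dataset repository
local_dir = snapshot_download(
    repo_id="jagennath-hari/nyuv2",
    repo_type="dataset",
    allow_patterns="*.json",
)

with open(os.path.join(local_dir, "class_names.json")) as f:
    class_names = json.load(f)    # maps class IDs to human-readable names

with open(os.path.join(local_dir, "camera_params.json")) as f:
    camera_params = json.load(f)  # intrinsics and distortion parameters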

🚀 How to Use

You can load the dataset using the datasets library:

from datasets import load_dataset

dataset = load_dataset("jagennath-hari/nyuv2", split="train")
sample = dataset[0]

# Access fields
rgb = sample["rgb"]
depth = sample["depth"]
semantic = sample["semantic"]
instance = sample["instance"]
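
Each image field decodes to a PIL image, so a NumPy conversion exposes the raw pixel values. For example (the dtypes in the comments follow the formats listed above):

import numpy as np

rgb_arr = np.array(sample["rgb"])      # (H, W, 3) array decoded from the PNG
depth_arr = np.array(sample["depth"])  # (H, W) uint16 depth in millimeters
print(rgb_arr.shape, depth_arr.dtype)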

🔄 Recover Original Values from TIFF Images

The dataset uses the .tiff format for all dense outputs to preserve precision and visual compatibility. Here's how to convert them back to their original values:

from datasets import load_dataset
from huggingface_hub import snapshot_download
from PIL import Image
import numpy as np
import json
import os

# Load sample
dataset = load_dataset("jagennath-hari/nyuv2", split="train")
sample = dataset[0]

# Download and load scaling metadata
local_dir = snapshot_download(
    repo_id="jagennath-hari/nyuv2",
    repo_type="dataset",
    allow_patterns="scaling_factors.json"
)
with open(os.path.join(local_dir, "scaling_factors.json")) as f:
    scale = json.load(f)

depth_scale = scale["depth_scale"]
label_max = scale["label_max_value"]
instance_max = scale["instance_max_value"]

# === Unscale depth (mm → m)
depth_img = np.array(sample["depth"])
depth_m = depth_img.astype(np.float32) / depth_scale

# === Unscale semantic mask
sem_scaled = np.array(sample["semantic"])
semantic_labels = np.round(
    sem_scaled.astype(np.float32) * (label_max / 65535.0)
).astype(np.uint16)

# === Unscale instance mask
inst_scaled = np.array(sample["instance"])
instance_ids = np.round(
    inst_scaled.astype(np.float32) * (instance_max / 65535.0)
).astype(np.uint16)
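
For convenience, the three recovery steps can be folded into a single helper. decode_sample below is a hypothetical name introduced here for illustration, not something shipped with the dataset:

import numpy as np

def decode_sample(sample, scale):
    """Recover metric depth (meters) and de-scaled masks from one sample."""
    depth_m = np.array(sample["depth"]).astype(np.float32) / scale["depth_scale"]
    semantic = np.round(
        np.array(sample["semantic"]).astype(np.float32)
        * (scale["label_max_value"] / 65535.0)
    ).astype(np.uint16)
    instance = np.round(
        np.array(sample["instance"]).astype(np.float32)
        * (scale["instance_max_value"] / 65535.0)
    ).astype(np.uint16)
    return depth_m, semantic, instance

depth_m, semantic_labels, instance_ids = decode_sample(sample, scale)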

πŸ“ Scaling Factors Summary

Field      Stored As        Original Format       Scaling Method                   Undo Formula
depth      uint16, mm       float32, meters       multiplied by depth_scale        depth / depth_scale
semantic   uint16, scaled   uint16 class IDs      scaled by 65535 / label_max      round(mask * (label_max / 65535.0))
instance   uint16, scaled   uint16 instance IDs   scaled by 65535 / instance_max   round(mask * (instance_max / 65535.0))
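
The undo formulas are lossless because the forward scaling stretches IDs apart by 65535 / label_max, so the rounding error introduced on storage never reaches half an ID. A quick self-contained check, using a made-up label_max (substitute the real value from scaling_factors.json):

import numpy as np

label_max = 894  # hypothetical example; read the real value from scaling_factors.json

ids = np.arange(label_max + 1, dtype=np.uint16)
stored = np.round(ids.astype(np.float32) * (65535.0 / label_max)).astype(np.uint16)
recovered = np.round(stored.astype(np.float32) * (label_max / 65535.0)).astype(np.uint16)
assert np.array_equal(ids, recovered)  # exact round trip for all valid IDs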

📄 Citation

If you use this dataset, please cite the original authors:

@inproceedings{Silberman:ECCV12,
  author    = {Nathan Silberman and Derek Hoiem and Pushmeet Kohli and Rob Fergus},
  title     = {Indoor Segmentation and Support Inference from RGBD Images},
  booktitle = {ECCV},
  year      = {2012}
}