---
task_categories:
  - image-to-3d
  - image-feature-extraction
language:
  - en
tags:
  - NVS
  - 3DGS
  - Relocalization
  - Egocentric
size_categories:
  - 100K<n<1M
---

# Oxford Day and Night Dataset

Project Page | arXiv | Video

Zirui Wang*¹, Wenjing Bian*¹, Xinghui Li*¹, Yifu Tao², Jianeng Wang², Maurice Fallon², Victor Adrian Prisacariu¹.

*Equal Contribution.
¹Active Vision Lab (AVL) + ²Dynamic Robot Systems Group (DRSG)
University of Oxford.


## Overview

We recorded 124 egocentric videos of 5 locations in Oxford, UK, under 3 different lighting conditions: day, dusk, and night.

This dataset offers a unique combination of:

- large scale (a 30 km camera trajectory covering a 40,000 m² area)
- egocentric views
- varying environments (outdoor + indoor)
- varying lighting conditions (day, dusk, night)
- accurate camera parameters (via the robust multi-session SLAM system provided by Meta ARIA MPS)
- ground-truth 3D geometry (via state-of-the-art LiDAR-scanned 3D models)


## Data Structure

```
[4.0K]  aria/bodleian-library
├── [4.0K]  mp4
│   └── [4.0K]  blur  [44 .mp4 files]
├── [4.0K]  mps
│   └── [4.0K]  multi
│       ├── [ 17G]  day_23.tar.gz
│       ├── [ 943]  day_23.txt
│       ├── [ 23G]  day_night_44.tar.gz
│       ├── [1.8K]  day_night_44.txt
│       ├── [5.2G]  night_21.tar.gz
│       └── [ 861]  night_21.txt
├── [4.0K]  ns_processed
│   └── [4.0K]  multi
│       ├── [4.0K]  fisheye624
│       │   ├── [4.3G]  day_23.tar.gz
│       │   ├── [7.0G]  day_night_44.tar.gz
│       │   └── [2.7G]  night_21.tar.gz
│       ├── [4.0K]  undistorted_all_valid
│       │   ├── [4.3G]  day_23.tar.gz
│       │   ├── [6.9G]  day_night_44.tar.gz
│       │   └── [2.6G]  night_21.tar.gz
│       └── [4.0K]  undistorted_max_fov
│           ├── [4.0G]  day_23.tar.gz
│           ├── [6.1G]  day_night_44.tar.gz
│           └── [2.1G]  night_21.tar.gz
└── [4.0K]  visual_reloc
    ├── [4.0K]  colmap
    │   ├── [4.0K]  bin
    │   │   ├── [  56]  cameras.bin
    │   │   ├── [172M]  images.bin
    │   │   └── [250M]  points3D.bin
    │   └── [4.0K]  text
    │       ├── [ 138]  cameras.txt
    │       ├── [157M]  images.txt
    │       └── [278M]  points3D.txt
    └── [4.0K]  imagelists
        ├── [ 85K]  db_imagelist.txt
        ├── [192K]  db_imagelist_with_intrinsics.txt
        ├── [ 44K]  query_day_imagelist.txt
        ├── [ 99K]  query_day_imagelist_with_intrinsics.txt
        ├── [ 99K]  query_night_imagelist.txt
        └── [221K]  query_night_imagelist_with_intrinsics.txt
```

### File explanation

#### Directory `mp4`

Contains face-blurred videos, mainly for dataset preview.

#### Directory `mps`

Contains multi-session MPS results, i.e., semi-dense point clouds, per-image camera parameters, and 2D observations.

#### Directory `ns_processed`

For NVS tasks. Contains processed images and camera poses.

- poses: camera poses are provided in two convenient conventions:
  - `transforms.json`: 4x4 C2W camera transformations in the OpenGL coordinate convention. This format is directly used by Nerfstudio.
  - `transforms_opencv.json`: 4x4 W2C camera transformations in the OpenCV coordinate convention. This format is widely used in the 3DGS, SLAM, and SfM communities.
- images: we provide both fisheye and undistorted images.
  - `fisheye624`: original fisheye RGB images recorded by the ARIA glasses.
  - `undistorted_max_fov`: undistorted using an approximate maximum FOV given the lens parameters. These images are fully undistorted but have black borders. Compared with `undistorted_all_valid`, they have a larger field of view, but you need to handle the black borders using the valid-pixel masks (provided in the tar.gz files).
  - `undistorted_all_valid`: undistorted using a smaller FOV than `undistorted_max_fov`. All pixels in these images are valid (no black borders). These images are easier to use but have a smaller field of view.
- lighting: the patterns day_<num_sessions>, night_<num_sessions>, and day_night_<num_sessions> denote sessions grouped by lighting condition, where <num_sessions> is the number of sessions in the group.
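The two pose conventions differ only by a flip of the camera's y and z axes plus a matrix inversion. A minimal sketch of converting OpenGL-convention C2W poses (as in a Nerfstudio-style `transforms.json`; the `frames`/`transform_matrix`/`file_path` field names are assumed from the Nerfstudio convention) into OpenCV-convention W2C matrices:

```python
import json
import numpy as np

def opengl_c2w_to_opencv_w2c(c2w_gl: np.ndarray) -> np.ndarray:
    """Convert a 4x4 OpenGL-convention C2W pose to an OpenCV-convention W2C pose."""
    flip = np.diag([1.0, -1.0, -1.0, 1.0])  # flip camera y (down) and z (forward) axes
    c2w_cv = c2w_gl @ flip
    return np.linalg.inv(c2w_cv)

def load_poses(path: str) -> dict:
    """Load {image_name: W2C_opencv} from a Nerfstudio-style transforms.json.

    Field names follow the Nerfstudio convention and are an assumption here.
    """
    with open(path) as f:
        meta = json.load(f)
    return {
        frame["file_path"]: opengl_c2w_to_opencv_w2c(np.array(frame["transform_matrix"]))
        for frame in meta["frames"]
    }
```

If you only need OpenCV-convention poses, loading `transforms_opencv.json` directly avoids the conversion entirely.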

#### Directory `visual_reloc`

For visual localization tasks. The data can be plugged directly into the HLoc benchmark and into the training of scene coordinate regression networks.

- `colmap`: contains poses, 2D observations, and 3D points in COLMAP format (.bin and .txt).
- `imagelists`: contains the training and testing splits.
  - `db_*.txt`: image lists for building the HLoc database and for training scene coordinate regression networks.
  - `query_*.txt`: image lists for testing.
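The text model in `visual_reloc/colmap/text` follows COLMAP's documented format, so it can be read without any COLMAP dependency. A minimal sketch of parsing W2C poses out of `images.txt` (it assumes every image header is followed by its POINTS2D line, as COLMAP writes them):

```python
import numpy as np

# images.txt stores two lines per image (after the comment header):
#   IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME
#   POINTS2D[] as (X, Y, POINT3D_ID) triplets (possibly empty)

def qvec_to_rotmat(qw, qx, qy, qz):
    """Unit quaternion (COLMAP order: w, x, y, z) to a 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

def read_images_txt(path):
    """Return {image_name: (R, t)} W2C poses from a COLMAP images.txt."""
    with open(path) as f:
        lines = [l.strip() for l in f if not l.startswith("#")]
    poses = {}
    for header in lines[0::2]:  # every other line is a pose header
        elems = header.split()
        qw, qx, qy, qz = map(float, elems[1:5])
        t = np.array(list(map(float, elems[5:8])))
        poses[elems[9]] = (qvec_to_rotmat(qw, qx, qy, qz), t)
    return poses
```

For the binary model (`colmap/bin`), COLMAP's own `read_write_model.py` utility or `pycolmap` is the more robust choice.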

## Ground truth 3D geometry

We use the 3D laser-scanned point clouds provided by the Oxford-Spires dataset. Download the dataset from:

## Download

We provide a script, `hf_download.py`, to download the dataset. Usage:

```shell
pip install -U "huggingface_hub[cli]"
python hf_download.py -o <your_local_dir>/oxford-day-and-night
```

Or run the following Python code directly:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="active-vision-lab/oxford-day-and-night",
    repo_type="dataset",
    local_dir="<your_local_dir>/oxford-day-and-night",
)
```
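Given the dataset's size, you may want only part of it. `snapshot_download` accepts glob filters via its `allow_patterns` parameter; a sketch that fetches a single scene's relocalization data (the path pattern is inferred from the `aria/<scene_name>/...` layout shown above):

```python
from huggingface_hub import snapshot_download

def download_scene(scene: str, local_dir: str) -> str:
    """Download only one scene's visual_reloc files using glob filtering."""
    # The path pattern follows the aria/<scene_name>/... layout shown above.
    return snapshot_download(
        repo_id="active-vision-lab/oxford-day-and-night",
        repo_type="dataset",
        local_dir=local_dir,
        allow_patterns=[f"aria/{scene}/visual_reloc/*"],
    )

# e.g. download_scene("bodleian-library", "data/oxford-day-and-night")
```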

## Application: Novel View Synthesis

This dataset can be trained directly with Nerfstudio and 3DGS-extended codebases.

Train a splatfacto or splatfacto-w model on a day-only split:

```shell
ns-train splatfacto --data <path>
ns-train splatfacto-w-light --data <path>
```

For splatfacto-w-light and splatfacto-w, you need to install splatfacto-w following the instructions in link1 or link2.

## Application: Visual Localization

Popular visual localization systems provide good support for COLMAP-formatted data, for example, HLoc benchmark, ACE0, Mast3r and etc. We convert our dataset to COLMAP format (.bin and .txt) to facilitate the usage of data on existing visual localization systems. Our COLMAP-formatted data can be found in aria/<scene_name>/visual_reloc/colmap. Exploring Localization with this COLMAP-formatted data is straightforward. See more details in HLoc benchmark.