---
dataset_info:
  features:
    - name: video
      dtype: video
  splits:
    - name: view
      num_examples: 4
    - name: realistic
      num_examples: 3700
    - name: low_texture
      num_examples: 8400
    - name: anime
      num_examples: 900
    - name: real_world
      num_examples: 2400
configs:
  - config_name: default
    data_files:
      - split: view
        path: view/*.mp4
      - split: realistic
        path: Realistic/*.mp4
      - split: low_texture
        path: Low-Texture/*.mp4
      - split: anime
        path: Anime/*.mp4
      - split: real_world
        path: Real-World/*.mp4
size_categories:
  - 10K<n<100K
license: cc-by-4.0
---

# GenEx-DB-World-Exploration 🎬🌍

This is the video version of the GenEx-DB dataset.

The dataset contains forward navigation paths captured by panoramic cameras. Each path advances 0.4 m per frame, for 50 frames in total. Each example is a single `.mp4` video reconstructed from the original frame folders.

## 📂 Splits

| Split | Description |
|-------|-------------|
| `realistic` | 📸 Unreal 5 City Sample renders |
| `low_texture` | 🏜️ Blender low-texture synthetic scenes |
| `anime` | 🌸 Unity stylized/anime scenes |
| `real_world` | 🎥 Handheld real-world clips collected on the JHU campus |
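Adding up the per-split counts from the metadata (including the small `view` split of 4 examples) gives the total that places the dataset in the `10K<n<100K` size category. A quick check:

```python
# Split sizes as listed in the dataset card metadata
splits = {
    "view": 4,
    "realistic": 3700,
    "low_texture": 8400,
    "anime": 900,
    "real_world": 2400,
}

total = sum(splits.values())
print(total)  # 15404, within the 10K<n<100K size category
```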

πŸ—οΈ Structure

```
Genex-DB-Video/
├── low_texture/
│   ├── video001.mp4
│   └── …
├── realistic/
│   └── …
├── anime/
│   └── …
└── real_world/
    └── …
```

Each file is named `<video_id>.mp4` and contains 50 frames (97 for `real_world`) at 10 FPS.
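At 10 FPS, those frame counts imply clip durations of 5 s and 9.7 s respectively. A quick sanity check, using the frame counts and frame rate stated above:

```python
# Clip duration in seconds = frame count / FPS
fps = 10
durations = {frames: frames / fps for frames in (50, 97)}
print(durations)  # {50: 5.0, 97: 9.7}
```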

## 🚀 Usage

```python
from datasets import load_dataset

# Load the "anime" split directly from the Hub
# (the configs in the card metadata map each split to its video files)
ds = load_dataset("genex-world/Genex-DB-World-Exploration", split="anime")

# Inspect one example; the `video` feature decodes to a video reader object
example = ds[0]
print(example["video"])
```
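Since the camera advances a fixed 0.4 m per frame, frame indices map linearly to distance along a path. A small helper sketching that mapping (the 0.4 m step is the figure stated above; the helper name is hypothetical):

```python
def frame_to_distance_m(frame_idx: int, step_m: float = 0.4) -> float:
    """Distance traveled (meters) at a given frame, assuming a constant
    0.4 m advance per frame as stated in the dataset card."""
    return frame_idx * step_m

# The last frame of a 50-frame clip (index 49) sits ~19.6 m from the start
print(frame_to_distance_m(49))
```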

## ✨ BibTeX

```bibtex
@misc{lu2025genexgeneratingexplorableworld,
      title={GenEx: Generating an Explorable World},
      author={Taiming Lu and Tianmin Shu and Junfei Xiao and Luoxin Ye and Jiahao Wang and Cheng Peng and Chen Wei and Daniel Khashabi and Rama Chellappa and Alan Yuille and Jieneng Chen},
      year={2025},
      eprint={2412.09624},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.09624},
}
```