Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation
Sherwin Bahmani, Tianchang Shen, Jiawei Ren, Jiahui Huang, Yifeng Jiang, Haithem Turki, Andrea Tagliasacchi, David B. Lindell, Zan Gojcic, Sanja Fidler, Huan Ling, Jun Gao, Xuanchi Ren
Dataset Description:
The PhysicalAI-SpatialIntelligence-Lyra-SDG Dataset is a multi-view 3D and 4D dataset generated using GEN3C. The 3D reconstruction setup uses 59,031 source images, while the 4D setup uses 7,378 source videos. All data are generated from diverse text prompts spanning a range of scenarios, including indoor and outdoor environments, humans, animals, and both realistic and imaginative content. We synthesize 6 camera trajectories for each image (3D) or video (4D), yielding 354,186 videos for the 3D setup and 44,268 videos for the 4D setup. Each video comes with RGB frames, camera poses, and depth maps.
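The video counts above follow directly from rendering each source image or video along 6 camera trajectories; a quick arithmetic check:

```python
# Each source image (3D) or video (4D) is rendered along 6 camera trajectories.
views_per_example = 6
print(59_031 * views_per_example)  # 354186 videos for the 3D setup
print(7_378 * views_per_example)   # 44268 videos for the 4D setup
```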
This dataset is ready for commercial use.
Dataset Owner(s):
NVIDIA Corporation
Dataset Creation Date:
2025/09/23
License/Terms of Use:
Visit the NVIDIA Legal Release Process for instructions on getting legal support for a license selection: https://docs.google.com/spreadsheets/d/1e1K8nsMV9feowjmgXhdfa0qo-oGJNlnsBc1Qhwck7vU/edit?usp=sharing
Intended Usage:
Researchers and academics working on spatial intelligence problems can use this dataset to train AI models for multi-view video generation or reconstruction.
Dataset Characterization:
Data Collection Method:
Synthetic
Labeling Method:
Synthetic
Dataset Format:
RGB videos in .mp4, camera poses in .npz, depth maps in .zip format
Dataset Quantification:
The 3D reconstruction setup has 59,031 multi-view examples, and the 4D setup has 7,378 multi-view examples. Each multi-view example has 6 views, and each view consists of an RGB video together with its camera poses and depth maps.
| Field | Format |
|---|---|
| Video | .mp4 |
| Camera pose | .npz |
| Depth | .zip |
Storage: 25 TB
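A minimal sketch of how a single view might be read in these formats is shown below. The file paths and key inspection are illustrative assumptions, not the dataset's documented layout; see https://github.com/nv-tlabs/lyra for the actual loading code.

```python
import zipfile

import imageio.v3 as iio
import numpy as np

# Hypothetical paths -- the real directory structure and file names may differ.
video_path = "example_0000/view_0.mp4"
pose_path = "example_0000/view_0.npz"
depth_path = "example_0000/view_0_depth.zip"

# RGB frames as a (num_frames, height, width, 3) uint8 array.
frames = iio.imread(video_path, plugin="pyav")

# Camera poses stored as arrays in an .npz archive; key names are not assumed here.
poses = np.load(pose_path)
print(list(poses.keys()))  # inspect the available arrays (e.g. intrinsics, extrinsics)

# Depth maps packaged as a zip archive, typically one file per frame.
with zipfile.ZipFile(depth_path) as zf:
    print(zf.namelist()[:5])  # inspect the per-frame depth entries
```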
Reference(s):
Please refer to https://github.com/nv-tlabs/lyra for how to use this dataset.
Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this dataset meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.
Citation:
@article{bahmani2025lyra,
title={Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation},
author={Bahmani, Sherwin and Shen, Tianchang and Ren, Jiawei and Huang, Jiahui and Jiang, Yifeng and
Turki, Haithem and Tagliasacchi, Andrea and Lindell, David B. and Gojcic, Zan and Fidler, Sanja and
Ling, Huan and Gao, Jun and Ren, Xuanchi},
journal={arXiv preprint arXiv:2509.19296},
year={2025}
}
@inproceedings{ren2025gen3c,
title={GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control},
author={Ren, Xuanchi and Shen, Tianchang and Huang, Jiahui and Ling, Huan and
Lu, Yifan and Nimier-David, Merlin and Müller, Thomas and Keller, Alexander and
Fidler, Sanja and Gao, Jun},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2025}
}