Dataset Description:
PhysicalAI-Robotics-Manipulation-SingleArm is a collection of datasets of automatically generated motions of a Franka Panda robot performing operations such as stacking blocks and opening cabinets and drawers. The dataset was generated in IsaacSim by leveraging task and motion planning algorithms to find solutions to the tasks automatically [1, 3]. The environments are table-top scenes where the object layouts and asset textures are procedurally generated [2].
This dataset is available for commercial use.
Dataset Contact(s):
Fabio Ramos ([email protected])
Anqi Li ([email protected])
Dataset Creation Date:
03/18/2025
License/Terms of Use:
cc-by-4.0
Intended Usage:
This dataset is provided in LeRobot format and is intended for training robot policies and foundation models.
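As a hedged illustration (not an official recipe from this card), the sketch below downloads a single subset with huggingface_hub and notes how it could then be opened with the lerobot library. The repo id, the subset folder name, and the lerobot import path are assumptions and may need adjusting to the actual layout on the Hub.

```python
# Minimal sketch, assuming the Hub repo id and subset folder name below;
# adjust both to the actual repository layout.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nvidia/PhysicalAI-Robotics-Manipulation-SingleArm",  # assumed repo id
    repo_type="dataset",
    allow_patterns="panda-stack-wide/*",  # fetch only one of the six subsets
)
print(f"Subset downloaded to {local_dir}")

# Opening the data with lerobot is version-dependent; one common pattern is
# (left commented out because the exact constructor arguments vary by release):
# from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
# dataset = LeRobotDataset(repo_id="nvidia/PhysicalAI-Robotics-Manipulation-SingleArm")
```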
Dataset Characterization
Data Collection Method
- Automated
- Automatic/Sensors
- Synthetic
The dataset was generated in IsaacSim by leveraging task and motion planning algorithms to find solutions to the tasks automatically [1]. The environments are table-top scenes where the object layouts and asset textures are procedurally generated [2].
Labeling Method
- Not Applicable
Dataset Format
Within the collection, there are six datasets in LeRobot format: panda-stack-wide, panda-stack-platforms, panda-stack-platforms-texture, panda-open-cabinet-left, panda-open-cabinet-right, and panda-open-drawer.
panda-stack-wide
The Franka Panda robot picks up the red block and stacks it on top of the green block.
- action modality: 7d, 6d relative end-effector motion + 1d gripper action (see the sketch after this subset's listing)
- observation modalities:
  - observation.state: 53d, including proprioception (robot joint position, joint velocity, end-effector pose) and object poses
  - observation.images.world_camera: 512x512 world camera RGB output stored as mp4 videos
  - observation.images.hand_camera: 512x512 wrist-mounted camera RGB output stored as mp4 videos
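As a hedged sketch of how the 7d action described above decomposes: the field name "action" and the use of NumPy arrays are assumptions about how the LeRobot data is loaded, while the 6d + 1d split comes directly from the action modality listing.

```python
import numpy as np

def split_stack_wide_action(action):
    """Split a 7d panda-stack-wide action into its documented parts."""
    action = np.asarray(action, dtype=np.float32)
    assert action.shape == (7,), "expected 7d action: 6d end-effector motion + 1d gripper"
    ee_delta = action[:6]  # 6d relative end-effector motion
    gripper = action[6]    # 1d gripper action
    return ee_delta, gripper

# usage with a dummy action vector
ee_delta, gripper = split_stack_wide_action(np.zeros(7))
```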
panda-stack-platforms
The Franka Panda robot picks up a block and stacks it on top of another block in a table-top scene with randomly generated platforms.
- action modality: 7d, 6d relative end-effector motion + 1d gripper action
- observation modalities:
  - observation.state: 81d, including proprioception (robot joint position, joint velocity, end-effector pose) and object poses
  - observation.images.world_camera: 512x512 world camera RGB output stored as mp4 videos
  - observation.images.hand_camera: 512x512 wrist-mounted camera RGB output stored as mp4 videos
panda-stack-platforms-texture
The Franka Panda robot picks up a block and stacks it on top of another block in a table-top scene with randomly generated platforms and random table textures.
- action modality: 8d, 7d joint motion + 1d gripper action
- observation modalities:
  - observation.state: 81d, including proprioception (robot joint position, joint velocity, end-effector pose) and object poses
  - observation.images.world_camera: 512x512 world camera RGB output stored as mp4 videos
  - observation.images.hand_camera: 512x512 wrist-mounted camera RGB output stored as mp4 videos
  - observation.depths.world_camera: 512x512 world camera depth output stored as mp4 videos, where 0-255 pixel value linearly maps to depth of 0-6 m (see the decoding sketch after this subset's listing)
  - observation.depths.hand_camera: 512x512 wrist-mounted camera depth output stored as mp4 videos, where 0-255 pixel value linearly maps to depth of 0-6 m
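Because the depth streams are stored as 8-bit mp4 videos, a decoded frame has to be mapped back to metric depth. A minimal sketch, assuming a decoded uint8 frame and using the 0-255 to 0-6 m linear mapping stated above:

```python
import numpy as np

def decode_depth(frame_u8, max_depth_m=6.0):
    """Convert a uint8 depth frame (0-255) to depth in meters (0-max_depth_m)."""
    return frame_u8.astype(np.float32) / 255.0 * max_depth_m

# usage with a dummy 512x512 frame: pixel value 128 maps to roughly 3.01 m
depth_m = decode_depth(np.full((512, 512), 128, dtype=np.uint8))
```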
panda-open-cabinet-left
The Franka Panda robot opens the top cabinet of a randomly generated cabinet from left to right.
- action modality: 8d, 7d joint motion + 1d gripper action
- observation modalities:
  - observation.state: 81d, including proprioception (robot joint position, joint velocity, end-effector pose) and object poses
  - observation.images.world_camera: 512x512 world camera RGB output stored as mp4 videos
  - observation.images.hand_camera: 512x512 wrist-mounted camera RGB output stored as mp4 videos
  - observation.depths.world_camera: 512x512 world camera depth output stored as mp4 videos, where 0-255 pixel value linearly maps to depth of 0-6 m
  - observation.depths.hand_camera: 512x512 wrist-mounted camera depth output stored as mp4 videos, where 0-255 pixel value linearly maps to depth of 0-6 m
panda-open-cabinet-right
The Franka Panda robot opens the top cabinet of a randomly generated cabinet from right to left.
- action modality: 8d, 7d joint motion + 1d gripper action
- observation modalities:
  - observation.state: 81d, including proprioception (robot joint position, joint velocity, end-effector pose) and object poses
  - observation.images.world_camera: 512x512 world camera RGB output stored as mp4 videos
  - observation.images.hand_camera: 512x512 wrist-mounted camera RGB output stored as mp4 videos
  - observation.depths.world_camera: 512x512 world camera depth output stored as mp4 videos, where 0-255 pixel value linearly maps to depth of 0-6 m
  - observation.depths.hand_camera: 512x512 wrist-mounted camera depth output stored as mp4 videos, where 0-255 pixel value linearly maps to depth of 0-6 m
panda-open-drawer
The Franka Panda robot opens the top drawer of a randomly generated cabinet.
- action modality: 8d, 7d joint motion + 1d gripper action
- observation modalities:
  - observation.state: 81d, including proprioception (robot joint position, joint velocity, end-effector pose) and object poses
  - observation.images.world_camera: 512x512 world camera RGB output stored as mp4 videos
  - observation.images.hand_camera: 512x512 wrist-mounted camera RGB output stored as mp4 videos
  - observation.depths.world_camera: 512x512 world camera depth output stored as mp4 videos, where 0-255 pixel value linearly maps to depth of 0-6 m
  - observation.depths.hand_camera: 512x512 wrist-mounted camera depth output stored as mp4 videos, where 0-255 pixel value linearly maps to depth of 0-6 m
Dataset Quantification
Record Count
panda-stack-wide
- number of episodes: 10243
- number of frames: 731785
- number of RGB videos: 20486 (10243 from world camera, 10243 from hand camera)
panda-stack-platforms
- number of episodes: 17629
- number of frames: 1456899
- number of RGB videos: 35258 (17629 from world camera, 17629 from hand camera)
panda-stack-platforms-texture
- number of episodes: 6303
- number of frames: 551191
- number of RGB videos: 12606 (6303 from world camera, 6303 from hand camera)
- number of depth videos: 12606 (6303 from world camera, 6303 from hand camera)
panda-open-cabinet-left
- number of episodes: 1512
- number of frames: 220038
- number of RGB videos: 3024 (1512 from world camera, 1512 from hand camera)
- number of depth videos: 3024 (1512 from world camera, 1512 from hand camera)
panda-open-cabinet-right
- number of episodes: 1426
- number of frames: 224953
- number of RGB videos: 2852 (1426 from world camera, 1426 from hand camera)
- number of depth videos: 2852 (1426 from world camera, 1426 from hand camera)
panda-open-drawer
- number of episodes: 1273
- number of frames: 154256
- number of RGB videos: 2546 (1273 from world camera, 1273 from hand camera)
- number of depth videos: 2546 (1273 from world camera, 1273 from hand camera)
Total storage: 15.2 GB
Reference(s):
[1] @inproceedings{garrett2020pddlstream,
title={Pddlstream: Integrating symbolic planners and blackbox samplers via optimistic adaptive planning},
author={Garrett, Caelan Reed and Lozano-P{\'e}rez, Tom{\'a}s and Kaelbling, Leslie Pack},
booktitle={Proceedings of the international conference on automated planning and scheduling},
volume={30},
pages={440--448},
year={2020}
}
[2] @article{Eppner2024,
title = {scene_synthesizer: A Python Library for Procedural Scene Generation in Robot Manipulation},
author = {Clemens Eppner and Adithyavairavan Murali and Caelan Garrett and Rowland O'Flaherty and Tucker Hermans and Wei Yang and Dieter Fox},
journal = {Journal of Open Source Software},
publisher = {The Open Journal},
year = {2024},
note = {\url{https://scene-synthesizer.github.io/}}
}
[3] @inproceedings{curobo_icra23,
author={Sundaralingam, Balakumar and Hari, Siva Kumar Sastry and
Fishman, Adam and Garrett, Caelan and Van Wyk, Karl and Blukis, Valts and
Millane, Alexander and Oleynikova, Helen and Handa, Ankur and
Ramos, Fabio and Ratliff, Nathan and Fox, Dieter},
booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
title={CuRobo: Parallelized Collision-Free Robot Motion Generation},
year={2023},
volume={},
number={},
pages={8112-8119},
doi={10.1109/ICRA48891.2023.10160765}
}
Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this dataset meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.