---
license: mit
size_categories:
  - 10K<n<100K
task_categories:
  - image-to-3d
  - robotics
  - image-feature-extraction
  - depth-estimation
  - image-to-text
  - other
modalities:
  - image
  - tabular
  - text
dataset_info:
  features:
    - name: image
      dtype: image
    - name: semantic_class
      dtype: string
    - name: transform
      dtype: string
    - name: Tx
      dtype: float32
    - name: Ty
      dtype: float32
    - name: Tz
      dtype: float32
    - name: rot_x
      dtype: float32
    - name: rot_y
      dtype: float32
    - name: rot_z
      dtype: float32
    - name: rot_w
      dtype: float32
  splits:
    - name: train
      num_bytes: 4106560699
      num_examples: 16000
    - name: validation
      num_bytes: 510934045
      num_examples: 2000
    - name: test
      num_bytes: 513367561
      num_examples: 2000
  download_size: 2568917592
  dataset_size: 5130862305
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

# VPT-Synth-Objects: A Synthetic Dataset for Visual Perspective Taking

This is a proof-of-concept synthetic dataset designed for training socio-cognitive foundational models for robotics, specifically in Visual Perspective Taking (VPT). The core task is to enable a robot to infer an object's 6D pose (position and orientation) relative to another agent, given a single RGB image.

This dataset was generated using NVIDIA Isaac Sim and Omniverse Replicator. Each entry provides an image alongside the ground-truth pose of a specific object within that image. The current version contains two object classes: a simple mug and a humanoid (x-bot).


## Dataset Details

### Dataset Summary

Visual Perspective Taking is a challenging task that traditionally requires large amounts of precisely labelled real-world data. This dataset serves as a proof-of-concept to explore the viability of using high-fidelity synthetic data as a scalable and cost-effective alternative.

The data consists of renders of objects (a mug and an x-bot humanoid) placed on a tabletop in a simple scene. For each image, the dataset contains a separate entry for each unique semantic object present, providing its class and exact 6D pose relative to the camera.

- Total Examples: 20,000
- Generator: NVIDIA Omniverse Replicator
- Objects: `mug`, `xbot_humanoid`

### Data Fields

The dataset has the following fields:

- `image`: A `PIL.Image.Image` object containing the rendered RGB image.
- `semantic_class`: A string indicating the class of the object for which the pose is provided (e.g., `"mug"`).
- `transform`: A string representation of the full 4x4 transformation matrix that maps points from the camera's coordinate frame to the object's local coordinate frame.
- `Tx`, `Ty`, `Tz`: The translation components (`float32`) of the object's pose in metres, extracted from the transformation matrix.
- `rot_x`, `rot_y`, `rot_z`, `rot_w`: The quaternion components (`float32`) representing the rotation of the object relative to the camera.
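Since the exact serialization of the `transform` string is not specified above, a practical alternative is to rebuild the 4x4 matrix from the scalar fields. The sketch below does this with NumPy; the helper name `pose_to_matrix` is illustrative, and the `(x, y, z, w)` quaternion ordering is an assumption inferred from the field names:

```python
import numpy as np

def pose_to_matrix(t, q):
    """Build a 4x4 homogeneous transform from a translation vector t = (Tx, Ty, Tz)
    and a unit quaternion q, assumed to be in (x, y, z, w) order."""
    x, y, z, w = q
    # Standard quaternion-to-rotation-matrix conversion
    R = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R   # rotation block
    T[:3, 3] = t    # translation column
    return T
```

A matrix built this way can be compared against the parsed `transform` string as a consistency check on your parsing code.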

### Data Splits

The data is split into training, validation, and test sets. The splits were created based on unique images to ensure no data leakage between the sets.

| Split      | Number of Examples |
|------------|--------------------|
| train      | 16,000             |
| validation | 2,000              |
| test       | 2,000              |

## How to Use

You can load and use the dataset with the `datasets` library as follows:

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub
dataset = load_dataset("jwgcurrie/SynthVPT")

# Access an example from the training set
example = dataset['train'][42]

image = example['image']
semantic_class = example['semantic_class']
translation_vector = [example['Tx'], example['Ty'], example['Tz']]
rotation_quaternion = [example['rot_x'], example['rot_y'], example['rot_z'], example['rot_w']]

print(f"Object Class: {semantic_class}")
print(f"Translation (m): {translation_vector}")
print(f"Rotation (quaternion): {rotation_quaternion}")

# To display the image (e.g., in a Jupyter notebook)
# image.show()
```
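Because `Tx`, `Ty`, `Tz` are expressed in metres in the camera frame, the camera-to-object distance follows directly from the translation components. A minimal helper (the name `object_distance` is illustrative, not part of the dataset):

```python
import math

def object_distance(tx, ty, tz):
    """Euclidean distance (in metres) from the camera origin to the object."""
    return math.sqrt(tx * tx + ty * ty + tz * tz)

# Example with the fields of a loaded example:
#   distance = object_distance(example['Tx'], example['Ty'], example['Tz'])
```

This kind of check is a quick way to spot implausible poses (e.g., objects behind the camera or far outside the tabletop scene) when iterating over the dataset.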