Clarification Needed on Coordinate Systems & Transformations for EE-Pose Projection

#34
by KY97 - opened

Hi AgiBot team and community,

Firstly, thank you for this excellent dataset! We are currently working on verifying the camera intrinsics and extrinsics by projecting the robot's end-effector Tool Center Point (TCP) onto the various camera images (head, hand_left, hand_right).

We've been carefully following guidance from previous discussions (like BobXie's insights) and the provided URDF files. However, we're encountering discrepancies in our visualization results and would be very grateful for your clarification on the transformation chain.

Our Goal:
To accurately project the robot's end-effector TCP onto all camera views to validate the calibration data.

Our Current Understanding of the Transformation Chain:

Our process transforms the TCP pose from its local frame (relative to the end-effector flange) to the pixel coordinates of each camera. Here's our step-by-step understanding:

End-Effector Flange to TCP (T_flange2tcp):

We start with the ee_state (e.g., from observation/robot_state/cartesian_position), which we assume is the pose of the arm's end-effector flange (e.g., Link7_l in the URDF).
To get to the gripper's TCP, we've derived a transformation T_flange2tcp based on the gripper_center_joint in G1_120s_dual.urdf. This involves a rotation around the Z-axis by -90 degrees, followed by a translation of +0.23m along the new Z-axis (of the TCP frame relative to the flange frame).
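For reference, this is roughly how we build T_flange2tcp from those values (the -90° rotation and the 0.23 m offset are what we read off gripper_center_joint in the URDF; please correct us if we misread them):

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the Z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Values we read from gripper_center_joint in G1_120s_dual.urdf:
# rotate -90 deg about Z, then translate +0.23 m along the rotated Z-axis.
T_flange2tcp = np.eye(4)
T_flange2tcp[:3, :3] = rot_z(np.deg2rad(-90.0))
T_flange2tcp[:3, 3] = T_flange2tcp[:3, :3] @ np.array([0.0, 0.0, 0.23])
```

Since the rotation is about Z itself, the translation component stays (0, 0, 0.23) either way, but we keep the explicit composition in case the joint origin actually involves other axes.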
Robot Base to World (T_base2world):

We assume the ee_state (and thus the flange pose) is relative to the robot's base coordinate system (base_link).
To transform it to the world frame (in which camera extrinsics are presumably defined), we apply T_base2world.
Currently, we are using an identity matrix for T_base2world, assuming the base and world frames are coincident.
World to Camera (T_world2cam):

Based on previous discussions, we understand the provided camera extrinsic data (rotation R and translation t) represents T_cam2world (i.e., the camera's pose in the world frame).
To get the required T_world2cam, we calculate the inverse: R_world2cam = R.T and t_world2cam = -R.T @ t.
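Putting the chain together, our projection logic is roughly the sketch below (function and variable names are ours; K is the camera intrinsic matrix, and T_base2flange is the 4x4 pose we build from ee_state):

```python
import numpy as np

def invert_se3(T):
    """Invert a rigid transform: (R, t) -> (R^T, -R^T t)."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def project_tcp(T_base2flange, T_flange2tcp, T_base2world, T_cam2world, K):
    """Project the TCP origin into pixel coordinates via the full chain."""
    T_world2cam = invert_se3(T_cam2world)
    T_cam_tcp = T_world2cam @ T_base2world @ T_base2flange @ T_flange2tcp
    p_cam = T_cam_tcp[:3, 3]        # TCP origin expressed in the camera frame
    if p_cam[2] <= 0:               # point is behind the camera
        return None
    uv = K @ (p_cam / p_cam[2])     # pinhole projection (no distortion model)
    return uv[:2]
```

Note we currently pass np.eye(4) for T_base2world, which is exactly the assumption we would like you to confirm or correct.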
Our Observations:

Implementing the above logic yields the following:

Head Camera View:

We can see the projected TCPs for both arms.
However, there's a significant and consistent offset: the projected points appear noticeably below the actual grippers in the image, which suggests a systematic error.
Hand Camera Views (hand_left, hand_right):

We are unable to see any projected points.
Our debugging indicates that the extrinsic data (head_extrinsic_params_aligned.json equivalent for hand cameras) is often None or not in the expected dictionary format for these views, causing our projection code to skip them.
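For context, this is roughly the validity check that makes our code skip those frames (the "rotation"/"translation" field names are our assumption about the JSON layout, not confirmed from the dataset spec):

```python
import numpy as np

def load_extrinsic(entry):
    """Return (R, t) if an extrinsic record looks valid, else None.

    'rotation' (3x3) and 'translation' (3,) field names are assumptions
    about how the per-frame extrinsic JSON is structured.
    """
    if not isinstance(entry, dict):
        return None
    R = entry.get("rotation")
    t = entry.get("translation")
    if R is None or t is None:
        return None
    R = np.asarray(R, dtype=float)
    t = np.asarray(t, dtype=float)
    if R.shape != (3, 3) or t.shape != (3,):
        return None
    return R, t
```

For the hand cameras this function returns None far more often than for the head camera, which is what prompted question 4 below.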
Our Questions:

To resolve these discrepancies, could you please clarify:

EE-Pose Coordinate System: In which coordinate system is the ee_state (from observation/robot_state/cartesian_position and orientation) defined? Is it the robot base frame (base_link), or directly the world frame?
Base vs. World Frame Definition & Transformation: Is there a defined transformation (T_base2world) between the robot's base_link frame and the world frame in which the camera extrinsics are defined? If so, what is it? Our current assumption of an identity matrix for T_base2world might be contributing to the observed offsets.
Camera Extrinsic Definition (Confirmation): Could you please confirm that the provided camera extrinsic data (e.g., from head_extrinsic_params_aligned.json) indeed represents T_cam2world (camera's pose in the world)? Our inversion to get T_world2cam seems logical if this is the case, but the offsets make us want to double-check.
Hand Camera Extrinsics Validity: Is the extrinsic data for the hand-mounted cameras expected to be consistently valid and available in all recorded frames? We frequently find this data missing or invalid.
Any clarification on these points would be immensely helpful for our work and likely beneficial for other users in the community.

Thank you for your time and support!
