Regarding the issue of aligning sample data end position and depth map

#33
by heealenrens - opened

Thank you very much for such high-quality open-source data work. I am using your sample data to write a processing program, and I need to align the point cloud obtained from the depth map with the position of the end effector. I applied the following transformation: `/state/end/` from proprioception gives `end_xyz`, and back-projecting the depth image gives the camera-frame point cloud `cam_P`. I then computed `world_P = R @ cam_P + T`, where `R` and `T` come from the camera's extrinsic parameters. As I understand it, `end_xyz` should coincide with the end effector in the transformed point cloud (`world_P`), but this is not the case.
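For reference, here is a minimal sketch of the pipeline I described, assuming a standard pinhole model and a 3x3 intrinsic matrix `K` (the function names and the exact intrinsics layout are my own assumptions, not from the sample data):

```python
import numpy as np

def backproject_depth(depth, K):
    """Back-project a depth map (H, W), in meters, into camera-frame
    points of shape (H*W, 3), assuming a pinhole model with intrinsics K."""
    h, w = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Pixel grid: u along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def cam_to_world(cam_P, R, T):
    """Apply world_P = R @ p + T to every point p, assuming the
    extrinsics map camera coordinates to world coordinates."""
    return cam_P @ R.T + T
```

One thing worth double-checking: extrinsics are sometimes stored in the opposite convention (world-to-camera), in which case the correct transform is the inverse, `world_P = R.T @ (cam_P - T)`. If the convention in the sample data is the reverse of what I assumed, that alone would explain a mismatch.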

Furthermore, I noticed that the sample data does not provide alignment extrinsics. Could this be the cause of the problem? If not, what else might explain it? I am testing with task_362, episode_id 649657, frame 581 from the sample data.
