---
pretty_name: shapesplat
size_categories:
- 10K<n<100K
---

# Depth Masking

Valid pixels in the rendered depth maps can be selected by discarding values at the near/far clipping boundaries and masking out the background:

```
# depth_max is assumed to be the far clipping distance of the renderer
valid_mask = (depth_linear > 0.001) & (depth_linear < depth_max - 0.001)
background_mask = depth_img > 0.999
valid_mask = valid_mask & ~background_mask
```

# Coordinate Alignment to 3DGS and OBJ Mesh

Due to an initial coordinate inconsistency, the poses of the 2D renderings saved in `frame['transform_matrix']` are not aligned with the world coordinates of the 3DGS object and the OBJ mesh. The following conversion of the `transform_matrix` key is needed to align with the OBJ object mesh:

```
import numpy as np

def convert_cam_coords(transform_matrix):
    # World-frame rotation: swaps the Y and Z axes of the world coordinates
    P = np.array([
        [1, 0, 0, 0],
        [0, 0, 1, 0],
        [0, -1, 0, 0],
        [0, 0, 0, 1]
    ])
    # Camera-frame flip: negates the camera's Y and Z axes
    # (OpenGL-style to OpenCV-style camera convention)
    C = np.array([
        [1, 0, 0, 0],
        [0, -1, 0, 0],
        [0, 0, -1, 0],
        [0, 0, 0, 1]
    ])
    new_transform_matrix = P @ transform_matrix @ C
    return new_transform_matrix

transform_matrix = np.array(frame['transform_matrix'])
transform_matrix = convert_cam_coords(transform_matrix)
```

After this conversion, the 2D renderings are aligned with the world coordinates of the original ShapeNet object, i.e., the point_cloud.obj file. For example, fusing the depth maps with the converted poses and overlaying the result on the corresponding point_cloud.obj gives:

*(figure: fused depth point cloud overlaid on point_cloud.obj)*

Additionally, there is a misalignment between the released 3DGS object and the corresponding point_cloud.obj file. To align the 2D renderings with the released 3DGS object instead, use the following conversion:

```
# align to the 3dgs object coordinates
def convert_cam_coords(transform_matrix):
    # Only the camera-frame flip is applied; no world-frame rotation is needed
    C = np.array([
        [1, 0, 0, 0],
        [0, -1, 0, 0],
        [0, 0, -1, 0],
        [0, 0, 0, 1]
    ])
    new_transform_matrix = transform_matrix @ C
    return new_transform_matrix
```
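As a quick sanity check on the 3DGS-alignment conversion, the sketch below (using an assumed identity-rotation pose, not actual data from this dataset) illustrates that right-multiplying a camera-to-world pose by `C` flips the camera's local Y and Z axes while leaving the camera position (the translation column) unchanged:

```python
import numpy as np

def convert_cam_coords(transform_matrix):
    # Camera-frame flip: negates the camera's Y and Z axes
    C = np.array([
        [1, 0, 0, 0],
        [0, -1, 0, 0],
        [0, 0, -1, 0],
        [0, 0, 0, 1]
    ])
    return transform_matrix @ C

# Hypothetical camera-to-world pose: identity rotation, camera at (0, 0, 2)
pose = np.eye(4)
pose[:3, 3] = [0.0, 0.0, 2.0]

aligned = convert_cam_coords(pose)

# Rotation part becomes diag(1, -1, -1); translation is untouched
assert np.allclose(aligned[:3, :3], np.diag([1.0, -1.0, -1.0]))
assert np.allclose(aligned[:3, 3], [0.0, 0.0, 2.0])
```

Because `C` acts on the camera frame (right multiplication) and has zero translation, the camera center is preserved; only the viewing-axis conventions change.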