Organized Multi Plane Segmentation and Cloud Transformations
I have a 3D camera mounted on the end of a robot arm, and I generally want the geometric features in my software to be expressed in the coordinate system centered on the robot origin. Unfortunately, if I transform the points into that frame before running OrganizedMultiPlaneSegmentation, the plane segmentation no longer works properly. Typically the algorithm returns fewer planes than if I run segmentation without transforming first, and often it returns no planes at all.
I have tried various combinations of setting and unsetting the sensor position/orientation on both the point cloud and the normal cloud, but this does not help. I can visualize the cloud and the normals, and they look correct. So far the only thing that has worked is to segment the planes before applying the transform, but I would prefer not to do that.
I am trying to understand from the implementation and the paper documenting the implementation why transforming the data would break the algorithm. If anyone has information or a solution to my problem, please let me know.
Re: Organized Multi Plane Segmentation and Cloud Transformations
From reading the paper, the organized multi plane segmenter assumes the cloud has a z-forward orientation. Since our cloud is generated x-forward, I rotated the cloud and the normals by 90 degrees about y to make it z-forward, and everything seems to work.