I am scanning an object on a rotating table. I have 73 point clouds, numbered 00 to 72, spanning a full 360-degree circle in 5-degree steps. I use ICP to match these clouds and get decent results: the error between each pair of neighbors is very small (less than 1 degree), including between cloud 72 (the last in the loop) and cloud 00, the pair where I assume a loop-closure event. However, when I compute the coordinates of the transformed clouds in the common reference frame, I see that there is in fact an approx. 15-degree rotational difference between cloud 72 and cloud 00. I therefore decided to use LUM, hoping that the erroneous 15-degree difference accumulated during pairwise ICP would be spread evenly over all the pairs. But that doesn't happen: LUM does not converge.
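To make the drift concrete (using a hypothetical per-pair bias of 0.2 degrees, since I haven't measured the individual errors precisely): even when every pairwise ICP error is well under 1 degree, a small systematic bias per pair adds up around the loop. A back-of-the-envelope sketch:

```cpp
#include <cmath>

// Accumulated loop-closure gap after chaining `pairs` pairwise ICP results,
// each carrying a (hypothetical) systematic angular bias of `bias_deg`.
// The nominal 5-degree steps cancel against the ideal 360-degree loop,
// so only the per-pair biases remain in the gap.
double loopGapDeg(int pairs, double bias_deg) {
    double gap = 0.0;
    for (int i = 0; i < pairs; ++i)
        gap += bias_deg;  // each edge contributes its bias to the gap
    return gap;
}
```

With 72 neighboring pairs and a 0.2-degree bias per pair, `loopGapDeg(72, 0.2)` gives 14.4 degrees — the same order as the ~15 degrees I observe, even though each individual pair error stays well below 1 degree.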
Prior to adding the point clouds to the LUM object, I transform all of them into the global common reference frame using the transforms found by ICP.
Then I call
lum.setCorrespondences(step-1, step, corr);
for all steps from 1 to 72, and
lum.setCorrespondences(72, 0, corr);
for the loop-closure pair.
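For reference, this is the graph structure I am building — just the vertex-index pairs that receive setCorrespondences calls, with the PCL-specific parts elided (buildLoopEdges is my own helper name, not a PCL function):

```cpp
#include <utility>
#include <vector>

// Vertex-index pairs for the LUM graph: one edge per neighboring pair,
// plus a single loop-closure edge from the last cloud back to cloud 00.
// Each pair (a, b) corresponds to one lum.setCorrespondences(a, b, corr) call.
std::vector<std::pair<int, int>> buildLoopEdges(int n_clouds) {
    std::vector<std::pair<int, int>> edges;
    for (int step = 1; step < n_clouds; ++step)
        edges.emplace_back(step - 1, step);  // 0-1, 1-2, ..., 71-72
    edges.emplace_back(n_clouds - 1, 0);     // loop closure: 72-0
    return edges;
}
```

So with 73 clouds there are 73 edges in total: 72 neighboring pairs and one loop-closure pair.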
Does anyone have ideas why LUM doesn't work here? I get huge errors in both the rotation and translation components for all poses. Can it be because LUM assumes that the pose differences among graph vertices are evenly or normally distributed? While I do have very small errors of *similar magnitude* for each pair 00-01, 01-02, ..., 71-72, there is a large angular difference between 72 and 00. Or is the problem that the 15-20 degree difference between clouds 72 and 00 breaks the small-angle assumption and makes the linearization trick at the heart of LUM inappropriate? What can be done to improve convergence? I have some overlap in the data, so would adding more links between LUM vertices help here? (I doubt it.)
In general, what is the maximal accumulated angular error that LUM can deal with?