I'm working on real-time object recognition with a Kinect 2. I got the code running, but my problem is with the rotation and translation of the model. Normal computation in PCL depends on the viewpoint setting; I tried playing with the setViewPoint(x, y, z) option, but without consistent results. That means if I move the object from position (0, 0, 0) to position (0.5, 0.5, 0.5), I get far fewer correspondences, because the descriptors are no longer the same due to the different normals around the object.
I tried to find a solution on a few forums and websites, but found nothing.
Is there any way to solve this problem in PCL?
I have a few ideas, for example a double or triple registration to lower the pose
estimation error after each registration, but I'd like to hear your opinion.