I'm currently trying to use FPFH (Fast Point Feature Histograms) for 3D shape matching.
As a preliminary test, I want to evaluate FPFH under simple transformations (translation and rotation). To do so, I created a translated and a rotated version of every object in my reference dataset (mostly Stanford, Princeton, etc.).
Then I use ISS to detect keypoints, compute FPFH descriptors at those keypoints, and finally, for each transformed object, find the closest object in the reference dataset.
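For the last step, the post doesn't specify the scoring used to compare two sets of descriptors, so here is a minimal numpy sketch under one common assumption: score each reference by the mean nearest-neighbour distance from the query's descriptors to that reference's descriptors, and pick the smallest. The function names and the 33-D random stand-ins for FPFH descriptors are illustrative only.

```python
import numpy as np

def set_distance(feats_a, feats_b):
    # Mean nearest-neighbour distance from each descriptor in A to set B.
    # (A simple stand-in for whatever matching score is actually used.)
    d = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    return d.min(axis=1).mean()

def closest_reference(query_feats, reference_sets):
    # Index of the reference object whose descriptor set is closest to the query.
    scores = [set_distance(query_feats, ref) for ref in reference_sets]
    return int(np.argmin(scores))

rng = np.random.default_rng(2)
# Three fake "reference objects", each with 50 FPFH-sized (33-D) descriptors.
refs = [rng.normal(loc=i, size=(50, 33)) for i in range(3)]
# A lightly perturbed copy of object 1 plays the role of a transformed query.
query = refs[1] + rng.normal(scale=0.01, size=refs[1].shape)
print(closest_reference(query, refs))  # 1
```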
I get great results with rotation (90% of the objects are matched correctly), whereas only 20% are matched correctly when they are only translated.
Do you have any idea why the algorithm performs so badly on translation?
I also tried other transformations (noise, occlusion, and resampling): the results are good and comparable to SHOT. The issue only appears with translation and FPFH.
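What makes this puzzling is that the pairwise angular features FPFH histograms are built from (the Darboux-frame angles α, φ, θ) are translation-invariant by construction: they depend only on the normals and the difference vector between two points, both of which are unchanged when the whole cloud is shifted. A minimal numpy sketch of those pairwise features, demonstrating the invariance on one random point pair:

```python
import numpy as np

def pair_features(p_s, p_t, n_s, n_t):
    # Darboux-frame angular features (alpha, phi, theta) used by PFH/FPFH.
    d = p_t - p_s
    d_unit = d / np.linalg.norm(d)
    u = n_s                      # frame axis 1: source normal
    v = np.cross(d_unit, u)      # frame axis 2
    v /= np.linalg.norm(v)
    w = np.cross(u, v)           # frame axis 3
    alpha = np.dot(v, n_t)
    phi = np.dot(u, d_unit)
    theta = np.arctan2(np.dot(w, n_t), np.dot(u, n_t))
    return np.array([alpha, phi, theta])

rng = np.random.default_rng(0)
p_s, p_t = rng.normal(size=3), rng.normal(size=3)
n_s = rng.normal(size=3); n_s /= np.linalg.norm(n_s)
n_t = rng.normal(size=3); n_t /= np.linalg.norm(n_t)

t = np.array([10.0, -5.0, 3.0])  # arbitrary translation
f_before = pair_features(p_s, p_t, n_s, n_t)
f_after = pair_features(p_s + t, p_t + t, n_s, n_t)  # normals are unaffected by translation
print(np.allclose(f_before, f_after))  # True
```

So if translation alone breaks the matching, the culprit is more likely somewhere else in the pipeline (e.g. keypoint detection, normal estimation, or a step that uses absolute coordinates) than in the FPFH descriptor itself.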
For the implementation, I reused the code from the FPFH tutorial.
I normalize each object to a size of 100, so I use a radius of 2 to compute the normals and a radius of 4 for the features.
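To make the scale setup concrete, here is a small numpy sketch of that normalization, assuming "normalize to 100" means scaling the cloud so its bounding-box diagonal equals 100 (an assumption; the post doesn't say which size measure is used). The radii then correspond to roughly 2% and 4% of the object size.

```python
import numpy as np

def normalize_to(points, target=100.0):
    # Scale the cloud so its axis-aligned bounding-box diagonal equals `target`.
    # ("Normalize to 100" is assumed to mean this; adjust if you scale differently.)
    diag = np.linalg.norm(points.max(axis=0) - points.min(axis=0))
    return points * (target / diag)

rng = np.random.default_rng(1)
cloud = normalize_to(rng.normal(size=(1000, 3)))

normal_radius = 2.0   # ~2% of the normalized size, for normal estimation
feature_radius = 4.0  # ~4% of the normalized size, for FPFH

diag = np.linalg.norm(cloud.max(axis=0) - cloud.min(axis=0))
print(round(diag, 6))  # 100.0
```

One thing worth checking: if the translated copies are not re-normalized (or the normalization itself is not translation-invariant), the effective radii may no longer match the object scale, which could explain the drop.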
I can post the code if needed (it needs some cleaning first xD).