I am trying to implement real-time visual odometry using the OpenCV ORB detector and FPFH descriptors, but the normal estimation time is the bottleneck.
I already have 2D-to-2D ORB odometry working: I match ORB descriptors, convert the matched keypoints into a point cloud, and run RANSAC. Now I'd like to try 3D feature descriptors instead.
The features tutorial describes computing descriptors for a subset of points, but it appears to require an existing cloud of surface normals. Using pcl::IntegralImageNormalEstimation to create the normals is reasonably fast, but it still takes 40-50 ms (with optimizations enabled), which is not fast enough for real-time use.
Is there a way to compute only the minimal set of normals needed for the FPFH descriptors? I assume this would be a small subset of my original image.
For reference, here is how I'm calculating normals:
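(The original snippet isn't reproduced here; a typical pcl::IntegralImageNormalEstimation setup for an organized cloud looks roughly like the following, with the method and parameter values being illustrative rather than the exact ones from my code.)

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/features/integral_image_normal.h>

// Illustrative setup: estimates a normal for every pixel of an
// organized cloud using integral images.
pcl::PointCloud<pcl::Normal>::Ptr
estimateNormals (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &cloud)
{
  pcl::IntegralImageNormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setNormalEstimationMethod (ne.AVERAGE_3D_GRADIENT);
  ne.setMaxDepthChangeFactor (0.02f);   // illustrative value
  ne.setNormalSmoothingSize (10.0f);    // illustrative value
  ne.setInputCloud (cloud);             // cloud must be organized

  pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal>);
  ne.compute (*normals);                // computes normals for the whole image
  return normals;
}
```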
Hmm, I would have said you could simply use the inherited setIndices() method to restrict the computation to a subset of points. But looking at the source code, that class doesn't seem to honor the indices the way the "regular" normal estimation class (pcl::NormalEstimation) does.
Therefore, I suggest you call computePointNormal() directly for just the subset of points you need - that should do it.
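As a sketch of that suggestion, the free function pcl::computePointNormal() from pcl/features/normal_3d.h can be fed a per-keypoint neighborhood, so you only pay for the normals you actually use. The helper name, the radius parameter, and the point types below are assumptions for illustration:

```cpp
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/features/normal_3d.h>   // pcl::computePointNormal
#include <pcl/search/kdtree.h>

// Hypothetical helper: compute normals only at the given keypoint
// indices instead of over the whole cloud.
pcl::PointCloud<pcl::Normal>::Ptr
computeKeypointNormals (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &cloud,
                        const std::vector<int> &keypoint_indices,
                        float radius)   // neighborhood radius, illustrative
{
  pcl::search::KdTree<pcl::PointXYZ> tree;
  tree.setInputCloud (cloud);

  pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal>);
  normals->resize (keypoint_indices.size ());

  std::vector<int> nn_indices;
  std::vector<float> nn_dists;
  for (std::size_t i = 0; i < keypoint_indices.size (); ++i)
  {
    // Gather the local neighborhood of this keypoint only.
    tree.radiusSearch ((*cloud)[keypoint_indices[i]], radius,
                       nn_indices, nn_dists);

    // Fit a plane to the neighborhood; first three components of
    // 'plane' are the normal, the fourth is the plane offset.
    Eigen::Vector4f plane;
    float curvature;
    pcl::computePointNormal (*cloud, nn_indices, plane, curvature);

    (*normals)[i].normal_x  = plane[0];
    (*normals)[i].normal_y  = plane[1];
    (*normals)[i].normal_z  = plane[2];
    (*normals)[i].curvature = curvature;
  }
  return normals;
}
```

Note that this trades the integral-image shortcut for per-point neighbor searches, so it only wins when the keypoint count is small relative to the image.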