I would like to run the KFPCSAlignment and the PPFRegistration algorithms to register point clouds bigger than 100K points, but both of them already crash with 100K points.
Concerning the PPFRegistration, the task needs too much memory on my computer (> 200 GB), so I guess the problem comes from the algorithm itself. Can anyone confirm that I haven't made any mistakes in my code and that it's not an implementation problem?
Concerning the KFPCSAlignment, I got the following error:
terminate called after throwing an instance of 'std::bad_alloc'
The error comes from the computeTransformation() function called during the alignment. Does anyone understand why I'm getting this error?
The code I used for both tests comes from the PCL unit tests and works fine with small point clouds.
I already replied to your email directly, but for completeness I will post the answer here as well.
Regarding your problem using K-4PCS, namely the memory consumption and, as described in your email, the bad results: there seems to be a basic usage problem as well as some parameter issues.
K-4PCS is meant to register point clouds represented by keypoints, BUT the keypoints need to be generated BEFOREHAND and PASSED as input to the K-4PCS registration class. The keypoint extraction is not part of the PCL K-4PCS implementation. The suggested workflow to extract useful keypoints is:
- In case of static scans, first apply a voxel-grid (or uniform-sampling) filter. The leaf size (sampling distance) should be set to keep as much information as possible while making the point density more uniform within the region of interest (i.e., the overlapping area). In my scenarios (mid-range laser scans), leaf sizes between 0.01 m and 0.5 m worked well. If your input point cloud is already quite uniform, you can skip this step.
- Extract 3D keypoints from the (uniformed) point cloud. The goal of this step is to strongly reduce the point cloud while keeping points which are by their nature well suited for registration. You should try to extract at least 1K and at most 10K keypoints (if it is a simple scene, fewer are also okay; if it is a very complex scene, a couple more are also okay, but try to avoid that: as you already noticed, the algorithm becomes very slow with more than 20K points).
- Note that both steps can easily be accomplished using PCL methods (e.g., VoxelGrid and SIFTKeypoint).
Finally, let’s have a look at the user parameters:
- Make sure your test PC is good enough to handle 10 parallel threads: it does not make sense to use more threads than available CPU cores, and the more threads, the higher the memory consumption.
- An estimated overlap of 0.8 (= 80%) sounds good.
- The delta value is quite critical (but sadly not very well documented, neither for 4PCS nor for K-4PCS). “Delta” can be used as a relative value, which is then used as a weight to calculate further parameters such as the expected distance to neighbors (as in the standard 4PCS, for example). For K-4PCS it should not and cannot be set as a relative value, because the keypoint cloud density is not a useful quantity to derive further parameters from (for instance, keypoints can be quite clustered). Hence “Delta” needs to be set as an absolute value (in meters). The good news is that it still makes sense to set it according to the density of the (uniformed) point cloud which served as the basis for the keypoint extraction, because that value is a good estimate of the keypoint “accuracy”. If you have used a voxel-grid filter, the leaf size can directly be used as “Delta” (e.g., 0.05 m); otherwise you should calculate the approximate point cloud density yourself if you do not know it in advance (there are PCL methods for this, e.g. in the 4PCS implementation).
- The score threshold of 0.15 is okay. If it is not set by the user, it is automatically set to 1 − estimated overlap (which would be 0.2 in your case). If you do not want early termination, set the value to 0.
I hope these hints help you, and I would be interested to hear about your results.