Hi, I'm working with Kinect point clouds and trying to calculate the transformation between two consecutive clouds using "pcl::IterativeClosestPoint" (in ROS Fuerte in my case), but there seems to be a problem with the alignment in my tests, especially when there are rotations between two consecutive captures.
The problem is I don't seem to find a configuration that works fine with translations and rotations at the same time (or even with all the movements I have; I don't care about the time for the calculations), so I'd like to ask if anybody knows how to set up the algorithm to find ANY of those, no matter how long it takes.

Here's an example with 3 pairs of point clouds: no movement, a translation and a rotation. The blue cloud is the estimation; it should match the red one. The right column uses RANSAC threshold 0.05 (default) and the one in the middle uses RANSAC threshold = 1.

The rest of the parameters are:

_icp_max_iterations:=500
_icp_max_distance:=1.000000E+00
_icp_transf_epsilon:=1.000000E-07
_icp_ransac_iterations:=1000

The iteration count is high compared to other threads in this forum so it doesn't stop too soon, and the Euclidean epsilon should be as near as possible to zero, but changing the default doesn't seem to make a difference.

The code for the alignment is pretty simple, but I include it in case I'm doing something wrong:

Eigen::Matrix4f odometryComm::process2CloudsICP(PointCloudT::Ptr &cloud_i, PointCloudT::Ptr &cloud_tgt)
{
    if (cloud_tgt == 0 || cloud_i == 0) {
        ROS_INFO("Empty clouds received; returning!"); //ROS_WARN
        return Eigen::Matrix4f::Zero();
    }

    pcl::IterativeClosestPoint<PointT, PointT> icp;

    // Sets both clouds into the algorithm
    icp.setInputCloud(cloud_i);
    icp.setInputTarget(cloud_tgt);

    if (maxIterations > 0 && epsilon > 0 && maxDistance > 0) {
        icp.setMaximumIterations(maxIterations);
        icp.setTransformationEpsilon(epsilon);
        icp.setMaxCorrespondenceDistance(maxDistance);
    } else
        ROS_ERROR("The algorithm is supposed to be run with positive values...");

    if (euclideanDistance > 0)
        icp.setEuclideanFitnessEpsilon(euclideanDistance);
    if (maxRansacIterations != 0)
        icp.setRANSACIterations(maxRansacIterations);
    if (ransacInlierThreshold > 0)
        icp.setRANSACOutlierRejectionThreshold(ransacInlierThreshold);

    // Run the calculations (returns nothing, but copies into "Final" the
    // input PC (cloud_i) transformed according to the result)
    PointCloudT Final;
    icp.align(Final);

    pcl_pub.publish(Final);
    pcl_pub_2.publish(cloud_tgt);
    pcl_pub_3.publish(cloud_i);

    return icp.getFinalTransformation();
}

I hope somebody can tell me what I am doing wrong. Thank you very much in advance!

Miguel

Hi Miguel,
* Michi05 <[hidden email]> [2012-07-21 14:58]:
> Hi, I'm working with Kinect point clouds and trying to calculate the
> transformation between two consecutive clouds using
> "pcl::IterativeClosestPoint" (in ROS Fuerte in my case), but there seems to
> be a problem with the alignment in my tests, especially when there are
> rotations between two consecutive captures.
>
> The problem is I don't seem to find a configuration that works fine with
> translations and rotations at the same time (or even with all the movements
> I have; *I don't care about the time for the calculations*) so I'd like to
> ask if anybody knows how to set up the algorithm to find ANY of those, no
> matter how long it takes.

ICP depends on quite a number of parameters and most of the time there is no best set for all point clouds, especially for the ones from the Kinect, which are quite noisy. One option is to play around with all ICP parameters to see if you can find a matching set for your application; another would be to try the new registration API in PCL, which is not completely documented at the moment:

http://pointclouds.org/documentation/tutorials/registration_api.php

Third, you can try kinfu.

> Here's an example with 3 pairs of point clouds: no movement, a translation
> and a rotation. Blue cloud is the estimation; it should match the red one.
> The right column uses RANSAC threshold 0.05 (default) and the one in the
> middle is RANSAC threshold = 1.
>
> http://www.pclusers.org/file/n4020648/Test__Jul21_Both_01.png
>
> The rest of parameters are:
> _icp_max_iterations:=500
> _icp_max_distance:=1.000000E+00

Assuming that you use the default units, this means the maximal point distance is one meter; that's a lot, try decreasing it.

> _icp_transf_epsilon:=1.000000E-07
> _icp_ransac_iterations:=1000
>
> I hope somebody can tell me what I am doing wrong.
> Thank you very much in advance!
> Miguel

Cheers,

Jochen
_______________________________________________
[hidden email] / http://pointclouds.org
http://pointclouds.org/mailman/listinfo/pclusers

Hi Miguel,
As a side note: if you do not restrict yourself to using only depth data, you could look at the visual odometry libraries that make use of the color information as well. For example, check the fovis library, which provides extremely fast and surprisingly precise motion estimations.

Regards,

Sergey
In reply to this post by Jochen Sprickerhof
...
pcl::PassThrough<PointT> pass;
pass.setInputCloud (ptcld);
pass.setFilterFieldName ("x");
pass.setFilterLimits (-x, x);
pass.filter (*temp);
ptcld = temp;
...

That way the initial cloud acts as a smaller pattern to be fit in the target cloud, and the central points get more importance (since they are the most likely to remain in the camera range and there seems to be no weighting).
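For readers without a PCL build at hand, the crop above can be sketched in plain C++. The Point struct and passThroughX function below are hypothetical stand-ins for pcl::PointXYZ and pcl::PassThrough, not PCL API; they only mirror the filter's logic:

```cpp
#include <cassert>
#include <vector>

// Minimal stand-in for a point type; pcl::PointXYZ has the same fields.
struct Point { float x, y, z; };

// Keep only points whose x coordinate lies in [-limit, limit],
// mimicking pass.setFilterLimits(-x, x) on the "x" field.
std::vector<Point> passThroughX(const std::vector<Point>& cloud, float limit) {
    std::vector<Point> out;
    for (const Point& p : cloud)
        if (p.x >= -limit && p.x <= limit)
            out.push_back(p);
    return out;
}
```

Trimming the source cloud this way keeps only the central points, which are the ones most likely to still be visible in the next capture.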
In either case, the most noticeable improvement is to do smaller movements between clouds, so that's the main point!

Thanks again for your help!

Miguel

2012/7/23 Jochen Sprickerhof [via Point Cloud Library (PCL) Users] <[hidden email]>
> Hi Miguel,

That's all, thank you very much. Regards.

Miguel

Attachment: Jul23  Test After Trim.png (38K)

Hi there,
Well, as I said before, I gave up on calculating big movements (even with 100% of the points and only one object in the scene). Now I get acceptable results with "ransac_threshold == 0.1" and movements that are either a single linear translation (30 cm maximum) or a single rotation (8 degrees, maybe 10). Of course I'd like to improve that with the "perfect values" for the algorithm, but I don't really understand its behaviour anymore :(

((Maybe the following is a little bit silly)) but... if I had to improve this, I think I would use "hints", that is:

icp.align(cloud_aligned, const Eigen::Matrix4f &guess);

With that overload of the "align" method you can suggest a transformation, so I would use very low maximum iterations and 4 different guesses: turn left, turn right, go forward and go backward. Maybe that way the algorithm wouldn't get lost, but I can't try anymore because I had to deliver my results already.

I'm sorry not to be of better help but, as I said, I hadn't found a better configuration, only better restrictions for the movements: 30 cm, 8 degrees and ransac_threshold=0.1 (depending on the scene, of course). I hope this helps a little bit at least. Good luck!!

Regards
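The "hints" idea above can be sketched without PCL: build a few candidate 4x4 guesses (yaw left/right, forward/back), pass each one to the two-argument align overload with a low iteration cap, and keep the result with the best fitness score. This is a sketch under assumptions: Mat4 and the helper functions below are hypothetical stand-ins for Eigen::Matrix4f, and the axis conventions (y up, z forward) are the usual camera frame, not something the thread specifies:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

using Mat4 = std::array<std::array<float, 4>, 4>;

// Identity matrix as the starting point for every guess.
Mat4 identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    return m;
}

// Rotation about the vertical (y) axis by 'rad' radians: a "turn" guess.
Mat4 yawGuess(float rad) {
    Mat4 m = identity();
    m[0][0] =  std::cos(rad); m[0][2] = std::sin(rad);
    m[2][0] = -std::sin(rad); m[2][2] = std::cos(rad);
    return m;
}

// Translation along the camera's forward (z) axis: a "move" guess.
Mat4 forwardGuess(float meters) {
    Mat4 m = identity();
    m[2][3] = meters;
    return m;
}

// Four candidate guesses: turn left, turn right, go forward, go backward.
std::vector<Mat4> makeGuesses(float rad, float meters) {
    return { yawGuess(rad), yawGuess(-rad),
             forwardGuess(meters), forwardGuess(-meters) };
}
```

Each candidate would then be tried as icp.align(out, guess), keeping the transformation whose getFitnessScore() is lowest.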

Hi Miguel:
sorry for the delay, but I'm spending a lot of time with PCL and the ICP algorithm :( without good results.

I'm trying to align a set of captures (Capture0, Capture1, ..., CaptureN). Now I'm testing with only two because otherwise it would take a lot of time... Well, using ICP with this configuration:

double dist = 0.7;
double rans = 0.9;
int iter = 100;

icp->setMaximumIterations (iter);
icp->setMaxCorrespondenceDistance (dist);
icp->setRANSACOutlierRejectionThreshold (rans);

icp->setInputTarget (model);
icp->setInputCloud (data);

icp->align (*tmp);

t = icp->getFinalTransformation () * t;
pcl::transformPointCloud (*data, *tmp, t);

In theory, tmp stores the result (the new aligned point cloud) between (model) and (data). If the transformation process is from (model) to (data), model doesn't change and (data) holds the aligned PCD. To check the result, I concatenate both, but they aren't aligned.

What am I doing wrong?

* artv_pcl <[hidden email]> [2012-08-27 22:25]:
> Hi Miguel:

Hi Artv,

> sorry for the delay, but I'm spending a lot of time with the PCL and the
> ICP algorithm :( without good result
> I'm trying to align a set of captures (Capture0, Capture1, ..., CaptureN).
> Now I'm testing only with two because otherwise it will take a lot of time...
> well, using the ICP, with this configuration
>
> double dist = 0.7;
> double rans = 0.9;
> int iter = 100;
>
> icp->setMaximumIterations (iter);
> icp->setMaxCorrespondenceDistance (dist);
> icp->setRANSACOutlierRejectionThreshold (rans);
>
> icp->setInputTarget (model);
> icp->setInputCloud (data);
>
> icp->align (*tmp);
>
> t = icp->getFinalTransformation () * t;
> pcl::transformPointCloud (*data, *tmp, t);

Why are you doing this? Depending on the initial value of t, tmp should either have the same transformation as before (as align returns it transformed with icp->getFinalTransformation), or it will be misaligned for sure.

> in theory, in tmp is stored the result (the new point cloud aligned) between
> (model) and (data).
> If the transformation process is from (model) to (data), model doesn't change
> and in (data) is the aligned PCD.
> To check the result, I do a concatenation of both, but they aren't aligned
>
> What am I doing wrong?

That's really hard to say without having a look at your point clouds. Basically, ICP will find an optimal solution if your point correspondences are correct and _any_ optimal solution if they are not. So the art of using ICP is to find parameters that work for your point clouds (see the archive of the list for lots of information on this).

Cheers,

Jochen
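Jochen's point about t can be made concrete in isolation. The line t = icp->getFinalTransformation() * t composes every per-pair increment, so t holds the *cumulative* motion, not the last step; transforming the latest cloud with it applies the whole chain. A minimal sketch in plain C++, with a 4x4 array standing in for Eigen::Matrix4f (the helpers are illustrative, not PCL/Eigen API):

```cpp
#include <array>
#include <cassert>

using Mat4 = std::array<std::array<double, 4>, 4>;

// 4x4 identity, the usual initial value for an odometry accumulator.
Mat4 identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0;
    return m;
}

// Pure translation along x by 'meters', playing the role of one
// per-pair result of getFinalTransformation().
Mat4 translation(double meters) {
    Mat4 m = identity();
    m[0][3] = meters;
    return m;
}

// Plain 4x4 matrix product, as in t = step * t.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}
```

After two steps of 1 m and 2 m, the accumulator holds 3 m, which is correct for mapping the latest capture into the *first* capture's frame, but wrong if applied to a cloud that align() has already moved.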
In reply to this post by artv_pcl
Hi there,

As Jochen well said, finding the perfect parameters is precisely the main-yet-tricky point of this. I can't even assure you that increasing the iterations will improve the result (stunning but true; in my tests the algorithm went crazy a couple of times after too many iterations). Using the usual parameters from the forum should work in the easiest cases at least.

About the rest: I'd guess that 't' is the cumulative transform with all the accumulated movements, am I on the right path? So you never get to know the last transformation (except for the first one, assuming that 't' is well-initialized to the identity, which I couldn't say) and, definitely, you don't store the transformation between each pair "model", "data".

But in "transformPointCloud", you store in "tmp" (which is already modified in "align") the result of transforming "data" with the cumulative movement (not the last one). I'm pretty sure about this.

Let 'ft = icp->getFinalTransformation()' be the "final transformation" between model and data; as far as I know, "icp->align(*tmp)" stores in "tmp" the result of transforming the initial cloud through "ft". That is, in the perfect case (where 'ft' is the perfect solution) "tmp" overlaps the final cloud.

The only cloud being modified by the code is "tmp", so if you concatenate model and data they won't be aligned (if you mean that); if that's the problem, you should compare "tmp" and the final cloud. Otherwise I didn't understand the problem.

By the way: for my tests, I was publishing "initialCloud" (inputCloud), "finalCloud" (inputTarget) and "tmp" (align(...)) and printing them with rviz using different colours. It was pretty easy to check because "tmp" overlapped "finalCloud" whenever the algorithm was right.

Miguel
Hi Miguel:
Sorry for the delay, but these days my fight with the PCL takes all my time :(
Finally the algorithm is running :( but unfortunately sometimes the results are not the best. I think that is because of the parameters:

- MaxCorrespondenceDistance
- EuclideanFitnessEpsilon
- TransformationEpsilon
- RANSACOutlierRejectionThreshold

Do you have any idea how I can configure them to obtain the best result?

Cheers...

Arturo...

2012/9/2 Michi05 <[hidden email]>

Arturo,
There is no magic in registration algorithms like ICP. You need to first understand the theory behind it, and learn about your dataset, in order to be able to particularize it and obtain a good solution.

Cheers,
Radu.
--
http://openperception.org

On 09/03/2012 01:56 AM, Arturo Quintana Torres wrote:
> Hi Miguel:
> Sorry for the delay, but, during these days my fight with the PCL is all time :(
> [...]
In reply to this post by artv_pcl
Hi.
I really don't have a clue about which exact values to place there. You should be able to decrease the order of magnitude of both "epsilon" values until no improvement is seen in the score... The other two parameters depend absolutely on the case.

What looks like a hint to me is the fact that your configuration works in certain situations; this makes me think about two paths:

- Maybe some movements are too big... I'd say that the algorithm should work when it finds about 90% of the previous image in the new one.
- What I did at some point (and it really helped) is to record a bagfile with some captures so the result is completely deterministic. With that file you can implement a loop or something that tests the same captures with different values and saves the "SCORES" just to compare. For example, if you have an "MCD" max correspondence distance, you can try with MCD/4, MCD/2, MCD, 2*MCD, 4*MCD.

I know it's a hell because of the amount of parameters, but they depend on the scenario and the kind of movements, and the score gets stuck very soon anyway, which reduces the amount of tests needed.

I can only add that some of the things that screwed up the calculations for me were the kinds of movements (front-back is different from turning left or right), the amount of points and significant points in the image (whether you can only see the floor/wall or you also have two boxes in sight), and the distance to the points.

I'm sorry I can't be more concrete, but I hope that those hints and criteria help you!

Michi
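The sweep-and-compare loop described above can be sketched generically. This is a hedged outline, not PCL code: runIcp is a hypothetical callback that would configure ICP with the candidate value, run align() on the recorded bagfile pair, and return icp.getFitnessScore() (lower is better):

```cpp
#include <cassert>
#include <functional>
#include <limits>
#include <vector>

// Sweep one ICP parameter over scaled candidates and keep the best score.
// 'base' is the starting value (e.g. the current MaxCorrespondenceDistance);
// 'runIcp' is a stand-in for: set the parameter, align the recorded clouds,
// return the fitness score. The winning candidate is written to *bestValue.
double sweepParameter(double base,
                      const std::function<double(double)>& runIcp,
                      double* bestValue) {
    const std::vector<double> scales = {0.25, 0.5, 1.0, 2.0, 4.0};
    double bestScore = std::numeric_limits<double>::max();
    for (double s : scales) {
        double candidate = base * s;      // MCD/4, MCD/2, MCD, 2*MCD, 4*MCD
        double score = runIcp(candidate); // align on the bagfile pair, get score
        if (score < bestScore) {
            bestScore = score;
            if (bestValue) *bestValue = candidate;
        }
    }
    return bestScore;
}
```

Because the clouds come from a recorded bagfile, every call with the same candidate is deterministic, so the scores are directly comparable across runs.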