How to take pictures and align (merge) point clouds to get a full 3D model?


aram
I want to get a 3D model of a real-world object.
I have two webcams; using OpenCV and StereoBM for stereo correspondence I get a point cloud of the scene, and by filtering on z I can extract a point cloud of the object alone.
I want to merge the point clouds of the object to get the whole 3D object.
I know that ICP is good for this purpose, but it needs the point clouds to be initially well aligned, so it is combined with SAC-IA to achieve better results. However, my SAC fitness score is too big (something like 70 or 40), and ICP doesn't give good results either.

My questions are:
- Is it OK for ICP if I just rotate the object in front of the cameras to obtain the point clouds? What angle of rotation is needed to achieve good results? Or is there a better way of taking pictures of the object for building a 3D model?
- Is it OK if my point clouds have some holes?
- What is the maximal acceptable SAC fitness score for a good ICP, and what is the maximal fitness score of a good ICP?
- Or should I use some other algorithm for merging?


Here is an example of my point cloud:
points_filtered_z2.ply

A snippet of my code:
void icp()
{
        // Load the two scans to register.
        PointCloud::Ptr cloud1(new PointCloud);
        pcl::io::loadPLYFile("point clouds/points_filtered_z2.ply", *cloud1);
        PointCloud::Ptr cloud2(new PointCloud);
        pcl::io::loadPLYFile("point clouds/points_filtered_z3.ply", *cloud2);
        cout << "points: " << cloud1->points.size() << endl;

        // Estimate normals and FPFH features for the initial alignment.
        PointCloudNormal::Ptr normals1 = getNormals(cloud1);
        PointCloudNormal::Ptr normals2 = getNormals(cloud2);
        PCFPFHSignature33::Ptr features1 = getFeaturesFPFH(cloud1, normals1, 0.03);
        PCFPFHSignature33::Ptr features2 = getFeaturesFPFH(cloud2, normals2, 0.03);

        // Initial alignment (SAC-IA) using the FPFH features.
        pcl::SampleConsensusInitialAlignment<PointT, PointT, pcl::FPFHSignature33> sac_ia;
        sac_ia.setInputSource(cloud1);
        sac_ia.setSourceFeatures(features1);
        sac_ia.setInputTarget(cloud2);
        sac_ia.setTargetFeatures(features2);
        sac_ia.setMaxCorrespondenceDistance(0.05);
        sac_ia.setMinSampleDistance(0.01);
        sac_ia.setMaximumIterations(200);
        sac_ia.align(*cloud1);  // cloud1 now holds the coarsely aligned copy
        std::cout << "SAC-IA done, fitness score: " << sac_ia.getFitnessScore() << std::endl;
        Eigen::Matrix4f transformation = sac_ia.getFinalTransformation();

        // Refine the alignment with ICP.
        pcl::IterativeClosestPoint<PointT, PointT> icp;
        icp.setInputSource(cloud1);
        icp.setInputTarget(cloud2);
        icp.setMaxCorrespondenceDistance(0.05);
        icp.setMaximumIterations(50);
        icp.setTransformationEpsilon(1e-8);
        icp.align(*cloud1);
        std::cout << "has converged: " << icp.hasConverged()
                  << " score: " << icp.getFitnessScore() << std::endl;

        // Accumulate the full source-to-target transformation.
        transformation = icp.getFinalTransformation() * transformation;

        // Merge the aligned pair and display it.
        *cloud1 += *cloud2;
        pcl::visualization::CloudViewer viewer("Simple Cloud Viewer");
        viewer.showCloud(cloud1);
        while (!viewer.wasStopped())
        {
        }
}

Re: How to take pictures and align (merge) point clouds to get a full 3D model?

VictorLamoine
Administrator
The angle of rotation depends on the shape, but the idea is that you need overlap between the two point clouds in order to align them correctly. A 50% overlap should always work; smaller overlaps will or won't work depending on the shape's complexity (the general idea is that the more complex the shape, the easier the alignment, because the problem is more constrained).

To improve your workflow very easily, I would suggest instrumenting the turntable with a protractor (manual or electronic) and then feeding the measured values into your program.

What you need to do is rotate the scan around the table's axis of rotation (so you also need to know the axis equation in your camera frame).

Even inaccurate measurements will help you a lot in aligning your point clouds (a 2° error is still better than a 20° one!).

Using the angles, you will probably be able to ditch the FPFH part of your code.

Bye

Re: How to take pictures and align (merge) point clouds to get a full 3D model?

aram
VictorLamoine wrote
What you need to do is to rotate the scan around the axis of rotation of the table (so you also need to know the axis equation in your camera frame).
What do you mean by "the axis equation in your camera frame"? If I know the angle, isn't it enough to just rotate my scan by this angle? I use two webcams to get the point cloud, and I know the translation and rotation vectors between the cameras.

Re: How to take pictures and align (merge) point clouds to get a full 3D model?

VictorLamoine
Administrator
You have 3 origins:

#1: the 3D sensor origin; this is the frame in which your sensor delivers the point clouds
#2: the turntable's center of rotation (its axis)
#3: the object centroid



When you rotate the object on the turntable, it rotates around #2. If you rotate the point cloud using a rotation matrix (see http://www.pointclouds.org/documentation/tutorials/matrix_transform.php), the rotation will be around #1, which you don't want.

You have to rotate the point clouds around the #2 axis.

Re: How to take pictures and align (merge) point clouds to get a full 3D model?

aram
Thank you for the detailed answer.
But sorry, I'm new to this field and I don't understand: how can I get the rotation around #2?

Re: How to take pictures and align (merge) point clouds to get a full 3D model?

VictorLamoine
Administrator
A simple yet not very precise way to do this would be:
- Make your setup: fix the 3D sensor and the turntable
- Place a cylinder on the table to represent the table's rotation axis
- Scan the cylinder and segment it (see this tutorial: http://www.pointclouds.org/documentation/tutorials/cylinder_segmentation.php)
- Use the axis equation found to rotate your models

Re: How to take pictures and align (merge) point clouds to get a full 3D model?

aram
Thank you, but it seems hard for me to construct a turntable; I want to do this without one.

I tried manually rotating the object by a small angle; here is an example of my dataset:
model.rar

With ICP alone, the fitness score for two consecutive point clouds is about 0.02-0.04.
The result after merging several point clouds is not good.

I also tried SAC for the initial alignment, but the SAC score is bad.
What can I do to get a model from these point clouds?

I tried other point cloud datasets, but the results are all similar.

Re: How to take pictures and align (merge) point clouds to get a full 3D model?

Isabel
Have you tried kinfu?


Re: How to take pictures and align (merge) point clouds to get a full 3D model?

aram
No, but as I understand it, KinFu is for large-scale clouds and for Kinect data. My point clouds are not generated from a Kinect, and they are not large-scale.

Re: How to take pictures and align (merge) point clouds to get a full 3D model?

Isabel
KinFu works with OpenNI cameras such as the Kinect or the Asus Xtion Pro Live. It also works at small scale (kinfu_app). Both kinfu_app and kinfu_largeScale have the possibility to use point clouds as input (recorded offline). I haven't tried this option, but it is there.

Cheers,
