point cloud fusion

Domididongo
I took several images with a Kinect One camera. For a sequence of frames I have registered RGB images and depth images, generated from the raw depth values.

Now I am trying to compute the camera pose for each image, for a depth-fusion application that reconstructs a model from the gathered images.

For the camera pose I used the same algorithm as shown in the PCL documentation.

At the moment I always align the images at step i to the images at step i-1. I have several problems with that, because sometimes an image contains errors or too much missing data, and every later image then builds on top of that error.

My new idea is to keep a point cloud as the scene and, at each step, align the new point cloud generated from the images at step i to that scene.
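
Roughly, this is the loop I have in mind (a minimal sketch; I am assuming pcl::IterativeClosestPoint for the alignment and a pcl::VoxelGrid to keep the merged scene compact, and the function and variable names are just placeholders):

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>
#include <pcl/filters/voxel_grid.h>

using PointT = pcl::PointXYZRGB;
using CloudT = pcl::PointCloud<PointT>;

// Align a new frame to the accumulated scene and merge it in.
// 'scene' is the accumulated cloud, 'frame' is the cloud built from
// the images at step i (both names are placeholders).
void mergeFrameIntoScene(CloudT::Ptr scene, CloudT::Ptr frame)
{
  pcl::IterativeClosestPoint<PointT, PointT> icp;
  icp.setInputSource(frame);
  icp.setInputTarget(scene);
  icp.setMaximumIterations(50);
  icp.setMaxCorrespondenceDistance(0.05);    // 5 cm, depends on the data

  CloudT aligned;
  icp.align(aligned);                        // 'aligned' = frame transformed into the scene frame
  if (!icp.hasConverged())
    return;                                  // skip frames that fail to align

  *scene += aligned;                         // concatenate the aligned points

  // Downsample the merged scene so it stays compact and duplicate
  // points from overlapping frames are collapsed.
  pcl::VoxelGrid<PointT> voxel;
  voxel.setInputCloud(scene);
  voxel.setLeafSize(0.005f, 0.005f, 0.005f); // 5 mm voxels, adjust to the scale
  CloudT::Ptr filtered(new CloudT);
  voxel.filter(*filtered);
  *scene = *filtered;
}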

My questions now are:
How can I easily add good/valuable points to the 'scene' point cloud?
Does 'object_aligned' contain both point clouds already aligned together, or does it only hold the input source point cloud transformed to match the target?
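
For reference, the call I am asking about looks like this (assuming the tutorial I mean is PCL's "Robust pose estimation of rigid objects" example, which uses pcl::SampleConsensusPrerejective; the parameter values are tutorial-style defaults, not tuned for my data):

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/sample_consensus_prerejective.h>

using PointNT       = pcl::PointNormal;
using FeatureT      = pcl::FPFHSignature33;
using CloudNT       = pcl::PointCloud<PointNT>;
using FeatureCloudT = pcl::PointCloud<FeatureT>;

// 'object', 'scene' and their feature clouds are assumed to be filled already.
void alignObjectToScene(CloudNT::Ptr object, FeatureCloudT::Ptr object_features,
                        CloudNT::Ptr scene,  FeatureCloudT::Ptr scene_features)
{
  pcl::SampleConsensusPrerejective<PointNT, PointNT, FeatureT> align;
  align.setInputSource(object);
  align.setSourceFeatures(object_features);
  align.setInputTarget(scene);
  align.setTargetFeatures(scene_features);
  align.setMaximumIterations(50000);          // RANSAC iterations
  align.setNumberOfSamples(3);                // points sampled per pose hypothesis
  align.setCorrespondenceRandomness(5);       // nearest features considered per point
  align.setSimilarityThreshold(0.9f);         // edge-length similarity pre-rejection
  align.setMaxCorrespondenceDistance(0.01f);  // inlier threshold, data dependent
  align.setInlierFraction(0.25f);             // required inlier fraction

  CloudNT object_aligned;
  align.align(object_aligned);
  // 'object_aligned' receives only the source ('object') transformed by the
  // estimated pose into the target's coordinate frame; the target ('scene')
  // points are not included in it.
}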

I am actually not trying to perform the alignment on a sequence of images gathered from a live stream. Instead I want to take a small number of images (15-20) and align those, so the translation and rotation between the images would be larger.

Can someone give me advice on how the parameters of these methods could be set or tuned to get good results?
So far I only saw a difference when changing 'setMaximumIterations'.
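
For completeness, these are the other knobs I would expect to matter (a sketch assuming an ICP-style refinement with pcl::IterativeClosestPoint; the values are placeholders I picked for testing, not recommendations):

#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

using PointT = pcl::PointXYZRGB;

// Placeholder values; the only change I noticed so far came from
// setMaximumIterations.
void configureIcp(pcl::IterativeClosestPoint<PointT, PointT>& icp)
{
  icp.setMaximumIterations(100);                // hard cap on ICP iterations
  icp.setMaxCorrespondenceDistance(0.1);        // ignore matches farther than 10 cm
  icp.setTransformationEpsilon(1e-8);           // stop when the pose change is tiny
  icp.setEuclideanFitnessEpsilon(1e-6);         // stop when the fitness stops improving
  icp.setRANSACOutlierRejectionThreshold(0.05); // outlier rejection distance
}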