
Merging two point clouds obtained from different angles

16 messages

Merging two point clouds obtained from different angles

octmr
Hello. We are trying to build a 3D scanner. We use structured light to get the point cloud of the surface, then rotate the sample by a known angle and repeat the scan. This way, we get a (large) number of point clouds that have different numbers of points and are sampled from different angles.

We now want to merge them using PCL. Unfortunately, the ICP algorithm seems to need two subsets of the point clouds that have the same number of points, and we don't know how to select these. Is there an algorithm that would suit our problem best? (Note: we _know_ the angle by which the rotation happened, but we don't know the position of the rotation axis.) It might help that we rotate by small angles, around 2-5°. It would also be very nice to get the position of the rotation axis as a result.

Thank you in advance,
Jan Oliver Oelerich

Re: Merging two point clouds obtained from different angles

Jochen Sprickerhof
Administrator
Hi Jan Oliver,

* octmr <[hidden email]> [2011-09-16 03:15]:
> Hello. We are trying to build a 3d scanner. We use structured light to get
> the point cloud of the surface, then rotate the sample around a known angle
> and repeat the scanning. This way, we get a (big) number of point clouds
> that have different amounts of points and are sampled from different angles.
>
> We now want to merge them using PCL. Unfortunately, the ICP algorithm seems
> to need two subsets of the point clouds that have the same amount of points.

Why do you think so? pcl::IterativeClosestPoint takes two point clouds, and as long as it finds enough point pairs it computes a transformation. They definitely don't need to be the same size.

> We don't know, how to select these. Is there any algorithm that would suit
> our problem best? (Note: We _know_ the angle around which the rotation
> happened, but we don't know the poisition of the rotation axis.) It might
> help, that we rotate around small angles, around 2-5°. It would be very nice
> to get the position of the rotation axis as a result as well.

Hmm, that doesn't really help. You could evaluate the transformation ICP returns afterwards and reject it if the angle is wrong. If you can estimate the rotation axis (maybe you use some kind of rotation disk), you could give it to ICP as an initial estimate.
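For reference, a minimal sketch of that workflow might look as follows. This is not code from the thread; cloud contents are left out and the initial guess is a placeholder:

```cpp
// Minimal pcl::IterativeClosestPoint sketch with an optional initial guess.
// Cloud contents and the guess below are hypothetical placeholders.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>

int main()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr source(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr target(new pcl::PointCloud<pcl::PointXYZ>);
  // ... fill source/target from the scanner; they may have different sizes ...

  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(source);  // older PCL releases use setInputCloud() instead
  icp.setInputTarget(target);

  // If you can estimate the rotation axis, encode the known rotation here.
  Eigen::Matrix4f guess = Eigen::Matrix4f::Identity();

  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align(aligned, guess);

  if (icp.hasConverged())
  {
    Eigen::Matrix4f T = icp.getFinalTransformation();
    // Inspect the rotation part of T and reject the result if the angle
    // deviates too much from the known turntable increment.
  }
  return 0;
}
```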

> Thank you in advance,
> Jan Oliver Oelerich

Cheers,

Jochen
_______________________________________________
[hidden email] / http://pointclouds.org
http://pointclouds.org/mailman/listinfo/pcl-users

Re: Merging two point clouds obtained from different angles

Martin Bertsche
Hi,

I have been thinking about the same issue for some time now. As Jochen said, ICP does not need equally sized point clouds. However, it seems that it always tries to match as many points as possible and does not throw any of them away in order to get a better match result. This is perfectly plausible, since the optimal result would otherwise be to discard all correspondences between the two point clouds and hence get an optimal distance of zero.

In your case the rotation axis will always be unknown due to the 2.5D nature of your data, so transformation estimation will be a game of chance. ICP, however, has two possible starting points: one is an initial transformation, the other is an initial correspondence vector. Due to the nature of your setup you only want to take certain parts of the source and target clouds (the ones that overlap) and match these parts.

So what I thought of was to use some local descriptor from the features library to find correspondences between the two point clouds that should mostly lie inside the overlapping parts. The quality of course depends strongly on the shape of your object. Using a Kinect, I plan to use PointXYZRGB to get better correspondences. After feeding only the two overlapping parts to ICP, I would apply the resulting transformation to the whole point cloud, add it to the first one, and so on.
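One possible sketch of this idea, using FPFH as the local descriptor and PCL's SampleConsensusInitialAlignment for a feature-based coarse alignment (this is an assumed choice of feature, and all radii and thresholds are made-up placeholders):

```cpp
// Hypothetical sketch: FPFH features on both clouds drive a coarse
// feature-based alignment, whose result can then seed ICP.
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/registration/ia_ransac.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

static pcl::PointCloud<pcl::FPFHSignature33>::Ptr
computeFPFH(const Cloud::Ptr& cloud)
{
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setRadiusSearch(0.03);  // placeholder radius, in cloud units
  ne.compute(*normals);

  pcl::PointCloud<pcl::FPFHSignature33>::Ptr features(new pcl::PointCloud<pcl::FPFHSignature33>);
  pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
  fpfh.setInputCloud(cloud);
  fpfh.setInputNormals(normals);
  fpfh.setSearchMethod(tree);
  fpfh.setRadiusSearch(0.05);  // should be larger than the normal radius
  fpfh.compute(*features);
  return features;
}

int main()
{
  Cloud::Ptr source(new Cloud), target(new Cloud);
  // ... fill source/target ...

  pcl::SampleConsensusInitialAlignment<pcl::PointXYZ, pcl::PointXYZ,
                                       pcl::FPFHSignature33> sac;
  sac.setInputSource(source);
  sac.setSourceFeatures(computeFPFH(source));
  sac.setInputTarget(target);
  sac.setTargetFeatures(computeFPFH(target));
  sac.setMinSampleDistance(0.01f);       // placeholder values
  sac.setMaxCorrespondenceDistance(0.1);

  Cloud prealigned;
  sac.align(prealigned);
  // sac.getFinalTransformation() can then be passed to ICP as the
  // initial guess and refined on the full clouds.
  return 0;
}
```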

I would be very happy if you could try this and maybe supply me with an implementation. For me it is not such a big issue right now, because VFH seems to be sufficient for my purposes, but my fellow students could use such data for manipulator path planning.

Regards

Martin


Re: Merging two point clouds obtained from different angles

Jochen Sprickerhof
Administrator
* Martin Bertsche <[hidden email]> [2011-09-16 13:45]:
> Hi,

Hi Martin,

> I have been thinking about the same issue for some time now. As
> Jochen said ICP, does not need equally sized Point clouds. However
> it seems that it always tries to match as many points as possible
> and does not throw away any of
> them in order to get a better match result. This is perfectly
> plausible since the optimal result would be then to discard all
> correspondences between the two point clouds an hence get an optimal
> distance of zero.

You can change the distance parameter (setMaxCorrespondenceDistance) to adapt this behaviour. You can also use outlier rejection (setRANSACOutlierRejectionThreshold) to improve your results. If you know the overlapping parts of the two point clouds, you can match only these as well, but ICP will not blindly use all points.
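For example, the knobs mentioned above are set like this (the numeric values assume clouds in meters and are placeholders, not recommendations):

```cpp
// Hypothetical tuning of the ICP parameters mentioned above; values
// assume the clouds are in meters and must be adapted to your data.
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

int main()
{
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setMaxCorrespondenceDistance(0.05);        // ignore pairs farther than 5 cm
  icp.setRANSACOutlierRejectionThreshold(0.02);  // reject outlier correspondences
  icp.setMaximumIterations(100);
  icp.setTransformationEpsilon(1e-8);            // stop when the transform stabilises
  return 0;
}
```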

> In your case you will always have the rotational axis is unknown due
> to the 2.5D nature of your data. So Transformation estimation will
> be a game of chance. ICP however has two possible starting points.
> One is an initial Transformation, the other is an initial
> correspondence vector. Due to the nature of your setup you only want
> to take certain parts of the source and target clouds (the ones that
> overlap) and match these parts.

I don't know the exact setup Jan Oliver uses, but I guess he gets 3D point clouds out of it that overlap partly. I wouldn't say that transformation estimation is a game of chance then; given correct correspondences, ICP will compute a transformation within the accuracy of your sensor. Regarding the starting points: ICP by default assumes no transformation and will converge to an optimal transformation by finding new correspondences, so the only starting point is an arbitrary transformation (if you had the "correct" correspondences, you could compute a transformation in one shot).
Which parts of the point clouds to take depends on the setup as well. If your setup senses the background of the sample too, you obviously want to remove it, because it will most probably undermine the matching process. On the other hand, if you were able to tell which parts overlap, registration would already be done ;).

> So what I thought of was to use some local descriptor from the
> features library in order to find correspondences between the two
> point clouds that should mostly lie inside the overlapping parts.
> The quality of course strongly depends on the shape of your object.
> Using a kinect i plan to use PointXYZRGB to get better
> correspondences. After feeding only the two overlapping parts to ICP
> I would apply the transformation to the whole point cloud and add it
> to the first one. a. s. o. ...
>
> I would be very happy if you could try this and maybe supply me with
> an implementation. For me it is not such a big issue right now
> because VFH seems to be sufficient for my purposes. However my
> fellow students could use such data for manipulator path planning.

> Regards
>
> Martin

Cheers,

Jochen

Re: Merging two point clouds obtained from different angles

Martin Bertsche
Hi Jochen,

maybe my case is somewhat different from Jan Oliver's. I get clustered data that only contains the object and no background. I also used a much larger angle increment of about 30 degrees, but in essence I did the same thing: I turned the object and not the sensor. So I only have 12 point clouds that I want to merge. The object is a standard plastic can for coffee cream, as you can buy in any German supermarket.

Despite the 30 degree angle, the overlap (maybe 800 of 1200 points) should still be enough for an acceptable registration. However, what I don't know in advance (and don't want to know, because the process is to be partially automated) is the rotation axis about which I am turning the object. Firstly, because right now I don't have a well-defined setup, so it wouldn't make sense to calculate it and give ICP a proper initial transformation. Secondly, I would like to make the acquisition process as easy as possible so it can be done by an undergraduate without much explanation. (Tape a polar coordinate system to a table, place the sensor about a meter away, run the acquisition software and wait for the prompt.)

My attempts to perform registration using ICP with these 12 point clouds were unsuccessful. I always registered the current cloud onto the result cloud from the last increment and added the result to a separate result cloud. I used RANSAC outlier rejection and correspondence distance thresholds from 0.0001 up to 10.0, with almost always the same result: the single clouds were piled on top of each other, as if ICP were trying to find the best match using all points.
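The incremental scheme I describe can be sketched roughly like this (a hedged outline, not my actual code; the thresholds are placeholders):

```cpp
// Hypothetical sketch of incremental turntable registration: register each
// new scan against the growing model cloud and accumulate the result.
#include <vector>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

int main()
{
  std::vector<Cloud::Ptr> scans;  // the 12 turntable views, filled elsewhere
  if (scans.empty()) return 0;

  Cloud::Ptr model(new Cloud(*scans[0]));  // seed the model with the first view

  for (std::size_t i = 1; i < scans.size(); ++i)
  {
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(scans[i]);
    icp.setInputTarget(model);
    icp.setMaxCorrespondenceDistance(0.05);        // meters, placeholder
    icp.setRANSACOutlierRejectionThreshold(0.02);  // placeholder

    Cloud aligned;
    icp.align(aligned);
    if (!icp.hasConverged())
      continue;  // or log this pair and inspect it separately

    *model += aligned;  // the aligned scan is already in model coordinates
  }
  return 0;
}
```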

Now it seems I was wrong all along. I will take a look at the code again and try a little more. If I run into the same problem, it would be very kind if you could take a look at the data and code and tell me if what I am doing makes sense. Maybe this way we can provide a beginner-friendly example of how to merge partially overlapping point clouds, in addition to the one on the web.

Cheers
Martin


Re: Merging two point clouds obtained from different angles

Jochen Sprickerhof
Administrator
* Martin Bertsche <[hidden email]> [2011-09-16 16:19]:
> Hi Jochen,
>
> maybe my case is somewhat different from the one of Jan Oliver. I
> get clustered data that only contains the object and no background.
> I also used a much higher angle increment of about 30 degrees. But
> in essence I did the same thing. I turned the object and not the
> Sensor. So I only have 12 point clouds that I want to merge. The
> object is a standard plastic can for coffee cream as you can buy in
> any German supermarket.

Do you mean something like this?
http://www.ghi-shop.de/WebRoot/Store16/Shops/61458946/48EB/B985/B962/B46F/59D4/C0A8/28B9/F106/IMGP8332.jpg
I guess applying ICP will be hard, as it looks almost the same from different angles.

> Despite the 30 degree angle the overlap maybe 800 of Points1200
> should still be enough for an acceptable registration. However what
> I don't know (don't want to know because the process is to be
> partially automatized) in advance is the rotational axis about which
> I am turning the Object. Firstly, because right now I don't have a
> well defined set-up such that it would make sense to calculate it
> and give ICP a proper initial transformation. Secondly I would like
> to make the acquisition process as easy as possible so it can be
> done by an undergraduate without much explanation. (Tape a polar
> coordinate system to a table, place the sensor about a meter away,
> run acquisition software and wait for prompt)
>
> My attempts to perform registration using ICP with these 12 point
> clouds were unsuccessful. I always registered the current cloud onto
> the result cloud from the last increment and added the result to a
> separate result cloud. I used the RANSAC outlier rejection and
> correspondence distance threshold from 0.0001 up to 10.0. With
> almost always the same result. The single clouds were piled on top
> of each other as if it were trying to find the best match using all
> points.

Did you try different distances as well? Which unit do you use in your point clouds? (The parameter depends on it ;).) You can also try to print or visualize the correspondences to get a clue about what's going on.
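Printing the correspondences could look roughly like this (an assumed sketch using pcl::registration::CorrespondenceEstimation; the distance threshold is a placeholder):

```cpp
// Hypothetical sketch: print the correspondences between two clouds to see
// how many point pairs survive a given distance threshold.
#include <iostream>
#include <pcl/point_types.h>
#include <pcl/registration/correspondence_estimation.h>

int main()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr source(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr target(new pcl::PointCloud<pcl::PointXYZ>);
  // ... fill the clouds ...

  pcl::registration::CorrespondenceEstimation<pcl::PointXYZ, pcl::PointXYZ> est;
  est.setInputSource(source);  // older releases use setInputCloud()
  est.setInputTarget(target);

  pcl::Correspondences corrs;
  est.determineCorrespondences(corrs, 0.05);  // max distance in cloud units

  std::cout << corrs.size() << " correspondences\n";
  for (const auto& c : corrs)
    std::cout << c.index_query << " -> " << c.index_match
              << " (distance measure " << c.distance << ")\n";
  return 0;
}
```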

> Now it seems I was wrong all along. I will take a look at the code
> again and try a little more. If I run into the same problem it would
> be very kind if you could take a look at the data and code and tell
> me if it makes sense what I am doing. Maybe this way we can provide
> a beginner friendly example on how to merge partially overlapping
> point clouds in addition to the one on the web.
>
> Cheers
> Martin

Cheers,

Jochen

Re: Merging two point clouds obtained from different angles

Martin Bertsche


On 16.09.2011 16:37, Jochen Sprickerhof wrote:

> do you mean something like this?
> http://www.ghi-shop.de/WebRoot/Store16/Shops/61458946/48EB/B985/B962/B46F/59D4/C0A8/28B9/F106/IMGP8332.jpg
> I guess applying ICP will be hard as it looks almost the same from
> different angles.
Ah no, it's the bigger type, the one with 125 or 250 ml.

> did you try different distances as well? Which unit do you use in your
> point clouds (the parameter depends on it ;) ). Also you can try to
> print or visualize the correspondences to get a clue whats going on.
As to which length unit I am getting: I don't have a clue right now, and I can't check because I've got to go. However, I hope the data returned by the Kinect is in SI units, in this case meters. That's why I chose this range for the parameters. I expected good results for 0.001 and 0.0001; everything else was just frustration :)

'til next week

Martin
Re: Merging two point clouds obtained from different angles

Nicola Fioraio
Hi Martin,

if you get the Kinect data from the PCL interface...don't worry, distances are in meters ;)

cheers
--Nicola


Re: Merging two point clouds obtained from different angles

luis_alex

Hi Martin.

Did you ever manage to register the point clouds properly? I was just reading this thread and I have similar problems:
I am trying to register clouds of objects obtained using a turntable (from the RGB-D dataset here: http://www.cs.washington.edu/rgbd-dataset/).

I have now made several tests using ICP (linear and non-linear), even with my own point types that include not only position but also color, normals and curvature (I tried all combinations), to no avail: the merged clouds end up, as you said above, mostly on top of one another and not placed "side by side".

I am now starting to feel frustrated, as I do not know what to try next. I thought this would be an almost trivial task, since the clouds are segmented and the difference between consecutive clouds is small (although I have also tried registering clouds while skipping some intermediate ones).

Any suggestions would be greatly appreciated.
Luis

Re: Merging two point clouds obtained from different angles

Jochen Sprickerhof
Administrator
* luis_alex <[hidden email]> [2012-03-29 03:26]:
>
> Hi Martin.

Hi Luis,

> Did you ever get to register the point clouds properly? I was just reading
> this thread and I have similar problems:
> I am trying to register clouds of objects obtained using a turn-table (from
> the RGB-D dataset here: http://www.cs.washington.edu/rgbd-dataset/).
>
> I have made now several test using ICP (linear and non-linear), even with my
> own point types that include not only the position but also color, normals,
> curvature (tried all combinations) and to no avail: the merged clouds are as
> you said above, mostly on top of one another and not placed "side-by-side".

Adding extra information will not help as ICP only uses XYZ.

> I am now starting to feel frustrated as I do not know what to try next. I
> thought this would be a almost trivial task since the clouds are segmented
> and the difference between consecutive clouds is small (although I have also
> tried registering these clouds jumping some intermediate ones).

Try different parameters for ICP (search the mailing list for hints on how to do it).

> Any suggestions would be greatly appreciated.
> Luis

Cheers,

Jochen

Re: Merging two point clouds obtained from different angles

luis_alex

Hi Jochen.

The problem is that to register a coffee mug (one object I have been working with) we can't use only shape, because otherwise the best solution really is to overlay the PCDs (the shape from most views is similar). Color has to be used, because the mug has a pattern printed on it: if we also used color information, a proper alignment might be found.

Is there a registration method in PCL that uses color info?

Cheers,
Luis


Re: Merging two point clouds obtained from different angles

Jochen Sprickerhof
Administrator
* luis_alex <[hidden email]> [2012-03-29 05:49]:

>
> Hi Jochen.
>
> The problem is that to register a coffee mug (one object I have been working
> with) we can't use only shape because otherwise the solution is really to
> overlay the pcds (the shape from most views is similar). Color has to be
> used because the mug has a pattern printed on it: if we used also color
> information then a proper alignment might be found.
>
> Is there a registration method in PCL that uses color info?

Have a look at
http://pointclouds.org/documentation/tutorials/registration_api.php#registration-api

> Cheers,
> Luis

Re: Merging two point clouds obtained from different angles

luis_alex
Jochen Sprickerhof wrote:
> Have a look at
> http://pointclouds.org/documentation/tutorials/registration_api.php#registration-api
I have seen that, and I have also "played" around with keypoint-based registration (using the ICCV 2011 tutorial code), but I got bad correspondences between keypoints. I guess I have to go back and check all the parameters more carefully.

Since registering clouds of objects captured on a turntable seems to me to be a "basic" task (compared to the remaining, much less constrained stuff), I thought someone had already figured the problem out and there would be a good recipe for doing this somewhere.

Thanks anyway!
Luis

Re: Merging two point clouds obtained from different angles

berker
Luis, Martin,

Have you made any progress? Please post if you have any successful results.
Middle East Technical University

Re: Merging two point clouds obtained from different angles

luis_alex
Hi Berker.

I have been working on a different subject, but will go back to this soon. I will post any findings in this thread. Please do that as well.

Cheers,
Luis

Re: Merging two point clouds obtained from different angles

super_wormy
As it has been a long time since this was posted, I wonder if anyone figured out how to do the trick. I'm currently facing this problem: I'm using an Intel RealSense to get (2 or more) point clouds of an upper body (or maybe an object) and trying to merge them.

As far as I know, this (http://vladlen.info/publications/fast-global-registration/) is one of the best algorithms at present, but it doesn't run fast enough. Has anyone succeeded in making it work? Please let me know if there are any other ways.

Thanks in advance.