How to only see the floor from a kinect camera

Dcannt
Hi, I have been working with the point cloud that a Kinect camera gives me. I
have done region growing and normal estimation on the point cloud, so I
suppose everything is now split into clusters. I need help using only the
cluster that corresponds to the floor in the frames the Kinect is giving me.




Re: How to only see the floor from a kinect camera

Sérgio Agostinho
Do you know which cluster that is? Or do you just have a number of clusters
and one of them is the floor?

Cheers

Re: How to only see the floor from a kinect camera

Stephen McDowell
If you don’t know which cluster it is and need to search, I think you just need to check the normal orientation and/or the height.

1. The normals should all point in roughly the same direction.  This should help you identify clusters like a table top or the floor.  If you are using one of the fast, approximate normal estimation methods, though, the normals will be much less reliable.  The pcl::IntegralImageNormalEstimation class produces some really excellent normals (which you can use, since a Kinect gives you an "organized" point cloud).  It may be too slow, though.  On my machine the AVERAGE_3D_GRADIENT method took around 20 milliseconds and COVARIANCE_MATRIX took around 40 milliseconds.  20 ms is a long time if you need to process this online, but if you don't then go for it!  (There is a small sketch of this just after the list.)

2. You can probably get away with just scanning the clusters and finding the one with the lowest height.  For example, if +y is pointing up, your floor will be the cluster with the lowest y values.  (There is a sketch of this at the end of this message.)
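
For the normals (point 1), the integral image version looks roughly like this.  This is an untested sketch adapted from the PCL integral image normal estimation tutorial; "cloud" stands in for whatever organized cloud your grabber hands you:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/features/integral_image_normal.h>

// "cloud" must stay organized (straight from the Kinect grabber); removing
// NaNs or voxelizing it first would break the integral image assumption.
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
pcl::IntegralImageNormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
ne.setNormalEstimationMethod(ne.AVERAGE_3D_GRADIENT);  // or ne.COVARIANCE_MATRIX (slower)
ne.setMaxDepthChangeFactor(0.02f);
ne.setNormalSmoothingSize(10.0f);
ne.setInputCloud(cloud);
ne.compute(*normals);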

I don’t think either of these is rock-solid.  If your point cloud isn’t level (e.g. the “ideal” floor is in the x-z plane, but your camera is angled down toward the ground), then the lowest “height” is not necessarily a good metric.  Basically, the common traits of the floor should be:

i) The normals are all roughly the same.
ii) The positions spread in a plane-like manner.  It just may not be that the plane is exactly Y=0 or something like that.

But the normals and positions are what I would start investigating :)  Hope that helps / inspires something more on point!
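
And for the height check (point 2), something along these lines, also untested; "cloud" is the full cloud and "clusters" is the std::vector<pcl::PointIndices> your region growing produced:

#include <limits>
#include <pcl/common/centroid.h>

// Pick the cluster with the lowest centroid.  This assumes a frame where +y
// points up; in the raw Kinect optical frame +y points down, so flip the
// comparison and look for the largest y instead.
int floor_cluster = -1;
float lowest_y = std::numeric_limits<float>::max();
for (std::size_t c = 0; c < clusters.size(); ++c)
{
  Eigen::Vector4f centroid;
  pcl::compute3DCentroid(*cloud, clusters[c].indices, centroid);
  if (centroid[1] < lowest_y)
  {
    lowest_y = centroid[1];
    floor_cluster = static_cast<int>(c);
  }
}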



Re: How to only see the floor from a kinect camera

Dcannt
I don't know which cluster is the one I need to use, so I have to look through all the clusters and find the one that is the floor. I suppose the floor is probably the region with the most points in it, and also the one whose normals point back toward the camera, because the camera is pointing at the floor.
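
Something like this is what I had in mind (not tested yet; "normals" and "clusters" are the outputs of my normal estimation and region growing steps):

#include <Eigen/Core>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

// Among the clusters whose average normal points back toward the camera
// (roughly -Z in the Kinect optical frame), keep the one with the most points.
// The 0.8 threshold is a guess and will need tuning if the camera is tilted a lot.
int floor_cluster = -1;
std::size_t biggest = 0;
for (std::size_t c = 0; c < clusters.size(); ++c)
{
  Eigen::Vector3f mean_n = Eigen::Vector3f::Zero();
  for (const auto& idx : clusters[c].indices)
    mean_n += normals->points[idx].getNormalVector3fMap();
  mean_n.normalize();

  const bool faces_camera = mean_n.dot(Eigen::Vector3f::UnitZ()) < -0.8f;
  if (faces_camera && clusters[c].indices.size() > biggest)
  {
    biggest = clusters[c].indices.size();
    floor_cluster = static_cast<int>(c);
  }
}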


Re: How to only see the floor from a kinect camera

Ben_
Hi, I have recently done this. You can use RANSAC segmentation to find the largest plane, which is most likely the floor. I used:

#include <iostream>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>

// cloud_f is the input cloud from the Kinect; cloud_p will receive the plane.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_p(new pcl::PointCloud<pcl::PointXYZ>);

pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients());
pcl::PointIndices::Ptr inliers(new pcl::PointIndices());

// Create the segmentation object
pcl::SACSegmentation<pcl::PointXYZ> seg;
seg.setOptimizeCoefficients(true);       // optional: refine the plane coefficients
seg.setModelType(pcl::SACMODEL_PLANE);
seg.setMethodType(pcl::SAC_RANSAC);
seg.setMaxIterations(1000);
seg.setDistanceThreshold(0.005);         // 5 mm inlier threshold

// Segment the largest planar component from the cloud
seg.setInputCloud(cloud_f);
seg.segment(*inliers, *coefficients);
if (inliers->indices.empty())
{
  std::cerr << "Could not estimate a planar model for the given dataset." << std::endl;
  return;   // (this snippet came from a loop; use break there, return here)
}

// Extract the inliers into their own cloud
pcl::ExtractIndices<pcl::PointXYZ> extract;
extract.setInputCloud(cloud_f);
extract.setIndices(inliers);
extract.setNegative(false);
extract.filter(*cloud_p);   // cloud_p will be the largest planar cluster.
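
If you want a quick sanity check that the largest plane really is the floor and not a wall, one idea (untested, continuing right after seg.segment(...) above; it needs <Eigen/Core> and <cmath>) is to look at the plane normal in the RANSAC coefficients:

// The model is ax + by + cz + d = 0, with (a, b, c) the plane normal.
// In the raw Kinect optical frame +y points down, so with a roughly level
// camera the floor normal should be close to the y axis (its sign is ambiguous).
Eigen::Vector3f n(coefficients->values[0],
                  coefficients->values[1],
                  coefficients->values[2]);
n.normalize();
if (std::abs(n[1]) < 0.8f)
  std::cerr << "Largest plane is not horizontal enough; it might be a wall." << std::endl;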
