Obtaining a point cloud with Kinect 1 vs Kinect 2


Obtaining a point cloud with Kinect 1 vs Kinect 2

Silex
Hi all,

I know this is not 100% related to PCL, but I thought I would ask here as well, since a lot of people use PCL with point clouds obtained from the Kinect.

My problem is the following:

At my university we have several Kinect 1's and Kinect 2's. I am testing the quality of the Kinect Fusion results on both devices, and unexpectedly the Kinect 2 produces worse results.

My testing environment:

- Static camera scanning a static scene.

In this case, if I compare the results from Kinect 1 and Kinect 2, the Kinect 2 point cloud looks much smoother and nicer. But if I look at the scans from a different angle, you can see that the Kinect 2 result is much worse, even though the point cloud is smoother. As you can see in the pictures, the resulting point cloud looks nice when viewed from the same position the camera was in, but as soon as I look at it from a different angle the Kinect 2 result is horrible; you can't even tell that there is a mug in the red circle.

(Images: the resulting point clouds, viewed from the original camera position and from a different angle.)

- Moving camera scanning a static scene

In this case the Kinect 2 results are even worse than in the case above, compared to the Kinect 1. Actually, I can't reconstruct anything at all with the Kinect 2 if I am moving it. The Kinect 1, on the other hand, does a pretty good job with a moving camera.

Does anybody have any idea why the Kinect 2 is failing these tests against the Kinect 1? As I mentioned above, we have several Kinect cameras at my university and I tested more than one of each, so this should not be a hardware problem.

Re: Obtaining a point cloud with Kinect 1 vs Kinect 2

Michael Korn
Did you modify the intrinsic camera matrix? This should be the first, most important (and very easy) step.
But there is a more complex problem: in my opinion, we also need to account for radial distortion because of the new focal length. This is not part of the current code, and several changes are necessary.
I saw the distortion in the depth images during some tests with the new Kinect and I found more information here: https://github.com/OpenKinect/libfreenect2/issues/41
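
If you want to experiment with compensating the distortion yourself, below is a minimal sketch of undistorting a Kinect v2 depth frame with OpenCV before handing it to the reconstruction. All intrinsics and distortion coefficients in it are placeholders, not real calibration values; you need your own calibration of the IR camera (or its factory parameters, which libfreenect2 can report).

// Minimal sketch: undistort a Kinect v2 depth frame with OpenCV before using it.
// Every number below is a placeholder -- substitute your own IR camera calibration.
#include <opencv2/opencv.hpp>

cv::Mat undistortDepth (const cv::Mat& rawDepth)   // 512x424, CV_16UC1
{
  // Hypothetical intrinsics (fx, fy, cx, cy) and distortion (k1, k2, p1, p2, k3).
  cv::Mat K = (cv::Mat_<double>(3, 3) <<
      365.0,   0.0, 256.0,
        0.0, 365.0, 212.0,
        0.0,   0.0,   1.0);
  cv::Mat dist = (cv::Mat_<double>(1, 5) << 0.09, -0.27, 0.0, 0.0, 0.09);

  cv::Mat map1, map2;
  cv::initUndistortRectifyMap (K, dist, cv::Mat(), K, rawDepth.size(),
                               CV_32FC1, map1, map2);

  cv::Mat undistorted;
  // Nearest-neighbour interpolation so depth values are not blended across object boundaries.
  cv::remap (rawDepth, undistorted, map1, map2, cv::INTER_NEAREST);
  return undistorted;
}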

Re: Obtaining a point cloud with Kinect 1 vs Kinect 2

Silex
Hi, I didn't modify anything, since I was testing with the Kinect Fusion example provided by Microsoft.

For Kinect 1 I used the provided example in the 1.8 SDK
and
for Kinect 2 I used the provided example in the 2.0 SDK.

It's just weird that upgrading the product (Kinect 1 -> Kinect 2) and upgrading the development kit (SDK 1.8 -> SDK 2.0) produces worse results...

Re: Obtaining a point cloud with Kinect 1 vs Kinect 2

samontab
In reply to this post by Silex
Hello Silex,

I think the difference in your scans is due to the fact that the Kinect 1 uses Structured Light, whereas the Kinect 2 uses Time of Flight to determine the range. These two techniques sample the world in different ways, and therefore have slight differences, as you were able to see.

The area that you marked highlights this difference because it shows the edges of an object. All sensors based on laser returns, like the Kinect 2, will exhibit this issue in the raw data: returns from the edges of an object are noisy. That's just how they work.
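
If those stray points around edges are a problem for your application, one common way to thin them out in PCL is a statistical outlier removal pass over the cloud. A minimal sketch is below; the neighbourhood size and threshold are only illustrative values that you would tune for your scene.

// Minimal sketch: suppress isolated "flying pixel" points around object edges
// using PCL's statistical outlier removal. Parameter values are illustrative only.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/statistical_outlier_removal.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
removeEdgeNoise (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered (new pcl::PointCloud<pcl::PointXYZ>);

  pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
  sor.setInputCloud (cloud);
  sor.setMeanK (50);              // neighbours used to compute each point's mean distance
  sor.setStddevMulThresh (1.0);   // drop points farther than mean + 1 * stddev
  sor.filter (*filtered);
  return filtered;
}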

Re: Obtaining a point cloud with Kinect 1 vs Kinect 2

Silex
Yeah, you might be right. Is there any way to get rid of those noisy edges? Because as it is, from my point of view, it is completely useless for object detection.


Re: Obtaining a point cloud with Kinect 1 vs Kinect 2

Michael Korn
In reply to this post by Silex
I don't use the Microsoft SDK, but I suppose they didn't modify the intrinsic camera parameters.
The results I get with KinFu from PCL (after setting the camera parameters) and the Kinect v2 are OK. One can see that the bottleneck is no longer the real resolution of the camera, but the resolution of the voxel grid.
However, the new Kinect brings new challenges:
1) radial distortion
2) extremely noisy data in some regions (e.g. a computer display, ±10 cm)
3) reflections (e.g. a black laptop case or a Kinect 1 case at a ~45° viewing angle: I always get the depth values from the white wall behind)
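
For reference, the camera parameter step I mentioned is just a matter of passing the depth camera intrinsics to the tracker before processing any frames. A minimal sketch with PCL's KinfuTracker is below; the fx/fy/cx/cy values are placeholders for your own Kinect v2 calibration, and I'm assuming a KinFu build that accepts the 512x424 depth resolution.

// Minimal sketch: tell PCL's KinFu which depth intrinsics to use.
// The numbers are placeholders -- use your own Kinect v2 IR calibration.
#include <pcl/gpu/kinfu/kinfu.h>

int main ()
{
  const int rows = 424, cols = 512;            // Kinect v2 depth resolution
  pcl::gpu::KinfuTracker kinfu (rows, cols);

  // fx, fy, cx, cy of the depth (IR) camera.
  kinfu.setDepthIntrinsics (365.0f, 365.0f, 256.0f, 212.0f);

  // ... then feed depth maps into kinfu (...) frame by frame as usual ...
  return 0;
}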

Re: Obtaining a point cloud with Kinect 1 vs Kinect 2

Silex
Hi Michael,

Thanks for sharing your experiences!

Is it possible for you to share a .ply or .pcd file of a test scene captured with the Kinect v2?
or
Could you show two images of a test scene, one from the viewpoint where the camera was and one from a different viewpoint?

Thanks in advance!


Re: Obtaining a point cloud with Kinect 1 vs Kinect 2

Michael Korn
Sorry for the late reply.
Since you mentioned KinFu, I created two screenshots with KinFu (the PCL version) using correct intrinsic parameters.
Unfortunately, all my recorded datasets contain moving objects, but that should not be a problem here.
On the one hand, you can see fine structures (a cable, an Ethernet switch, a power adapter) on the wall.
But on the other hand, there are a lot of issues.
Marked in red, you can see noise at a computer monitor and a black computer case.
Marked in green, the right and top sides of a grey drawer unit are missing; instead, the depth data comes from the white wall (reflection).
You can also see reflections from the wall on a moving robot (a Kinect 1 and a black notebook, marked in blue). The robot is not part of the reconstruction because it is moving.
Moreover, you can find reflections of the wall on the ground (orange).


Re: Obtaining a point cloud with Kinect 1 vs Kinect 2

Silex
Hi Michael,

Thanks for sharing your experience and images! I have problems similar to what you show in the images.
I am working on some methods to make the Kinect v2 reconstruction more viable, and I will post the results here.

Thanks once more!