Generating my own depth image from Asus Xtion Pro Live's IR image?


ngoonee
I'm trying to use the Asus Xtion Pro Live with a reflective surface
and obviously running into various problems. In trying to deal with
that, I noticed that a bit of motion blur (from vibrating the camera
slightly) made the depth image MUCH better, with minimal holes due to
the reflective surfaces. Unfortunately my application calls for a
fixed camera location, but I was wondering whether I could replicate
that effect by applying a filter on the IR stream.

Of course, I'd need to know how the IR stream is used to generate the
depth image (or the point cloud, I can convert between them fairly
easily). Any ideas? Google doesn't seem to show anyone doing that.
_______________________________________________
[hidden email] / http://pointclouds.org
http://pointclouds.org/mailman/listinfo/pcl-users

Re: Generating my own depth image from Asus Xtion Pro Live's IR image?

kwaegel
Administrator
I suspect you are getting better data because shaking the camera blurs the world portion of the image, but leaves the projected IR pattern sharp (since the camera and projector move in sync). There have been a few papers on this effect.

I doubt this is something you can replicate with a software filter, since identifying the non-dot portions of the image to blur is equivalent to finding the dots in the first place.

Generating a depth image from the IR image is equivalent to depth-from-stereo, but with one camera replaced by a projector with a known pattern of feature points. Calculating the depth requires matching the point patterns and calculating the offset in pixels, then using some math to get a metric distance value.

Duplicating this process yourself requires a) knowing the projected dot pattern, and b) knowing the offset between the IR projector and camera. Calculating these two might be a bit tricky.
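For reference, the "some math" in the last step is the standard stereo triangulation relation depth = f * b / d, where f is the focal length in pixels, b is the projector-camera baseline, and d is the measured disparity. A minimal sketch of that final conversion, with made-up calibration numbers (the focal length and baseline below are illustrative placeholders, not the Xtion's actual parameters):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Convert a disparity map (pixels) to metric depth via depth = f * b / d.

    Pixels where no pattern match was found (disparity == 0) are left as
    0, i.e. invalid depth -- the 'holes' seen on reflective surfaces.
    """
    depth = np.zeros_like(disparity_px, dtype=np.float64)
    valid = disparity_px > 0
    depth[valid] = focal_length_px * baseline_m / disparity_px[valid]
    return depth

# Illustrative numbers only: f = 570 px, baseline = 7.5 cm.
disparity = np.array([[57.0, 28.5],
                      [ 0.0, 114.0]])
depth = disparity_to_depth(disparity, focal_length_px=570.0, baseline_m=0.075)
# e.g. depth[0, 0] = 570 * 0.075 / 57 = 0.75 m; depth[1, 0] stays 0 (hole)
```

The hard parts the post mentions, (a) and (b), are the inputs to this function's upstream step: you would first need the reference dot pattern and a matcher to produce the disparity map at all.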

Re: Generating my own depth image from Asus Xtion Pro Live's IR image?

ngoonee
On Thu, Mar 19, 2015 at 5:31 PM, kwaegel <[hidden email]> wrote:
> I suspect you are getting better data because shaking the camera blurs the
> world portion of the image, but leaves the projected IR pattern sharp (since
> the camera and projector move in sync). There have been a  few papers
> <http://www.cs.unc.edu/~fuchs/kinect_VR_2012.pdf>   on this effect.

Thanks, yes this seems to be the case.

> Generating a depth image from the IR image is equivalent to
> depth-from-stereo, but with one camera replaced by a projector with a
> /known/ pattern of feature points. Calculating the depth requires matching
> the point patterns and calculating the offset in pixels, then using some
> math to get a metric distance value.
>
> Duplicating this process yourself requires a) knowing the projected dot
> pattern, and b) knowing the offset between the IR projector and camera.
> Calculating these two might be a bit tricky.

The offset you mention in (b) is known and advertised by the camera
itself, easily obtained via OpenNI2 (and its PCL wrapper as well).

a) is the one that worries me. It seems that PCL via OpenNI2 just hooks
into a depth map/IR map (or depth map/RGB image) output from the
camera, and that the pattern matching occurs on-camera rather than in
software. In retrospect this seems obvious, but I come from a stereo
background where processing is almost always done off-camera.

So it seems my search down this path is practically a dead end.
Other research work at my institution seems to show the Kinect v2
performing better on similar reflective surfaces, though, so I think
I'll check that out (it's almost double the price, unfortunately).

Thank you for your help, kwaegel.