Pose calibration of two static RGB-D sensors facing each other


Dmitry
Hello, guys!

We need high-precision pose calibration of two static, identical RGB-D sensors (2 x Kinect v2 or 2 x Intel RealSense SR300) in a system where they face each other.

Our idea is to combine the clouds from these sensors to track small objects, such as a pen held in a hand, with high precision (a deviation of no more than one millimeter).

For more than a month we have been looking for at least some information on this issue (Google, ROS books, forums), and we have found only one solution: RGB-only calibration with OpenCV (https://github.com/jbohren-forks/camera_pose). But that solution does not use the depth camera at all, so half of the information is simply discarded.
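Ideally we would refine such an RGB-only extrinsic estimate with the depth data, e.g. via ICP in PCL. Below is a rough, untested sketch of what we have in mind; the cloud variables, the initial guess, and the parameter values are all placeholders, not part of camera_pose:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>

// Refine a rough B->A extrinsic (e.g. from an RGB checkerboard calibration)
// by registering the two depth clouds with ICP.
Eigen::Matrix4f refineExtrinsic(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr &cloud_a,  // cloud from sensor A
    const pcl::PointCloud<pcl::PointXYZ>::Ptr &cloud_b,  // cloud from sensor B
    const Eigen::Matrix4f &initial_guess)                // RGB-derived B->A guess
{
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(cloud_b);
  icp.setInputTarget(cloud_a);
  icp.setMaxCorrespondenceDistance(0.05);  // 5 cm gate; tune to the setup
  icp.setMaximumIterations(50);

  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align(aligned, initial_guess);       // seed ICP with the RGB estimate
  return icp.getFinalTransformation();     // refined B->A extrinsic
}

One concern: two sensors facing each other see mostly opposite sides of the scene, so ICP would only help if enough geometry is visible to both cameras, e.g. a calibration object placed between them.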

If you have experience with similar systems, or know of any literature or scientific work in this area, please give us advice.

(P.S. I apologize for my bad English)
Re: Pose calibration of two static RGB-D sensors facing each other

Oliver Arend
> We need *high-precision pose calibration* of two static, identical RGB-D sensors (2 x Kinect v2 or 2 x Intel RealSense SR300) in a system where they face each other.
> Our idea is to combine the clouds from these sensors to track small objects, such as a pen held in a hand, with high precision (a deviation of no more than one millimeter).

We are using two Kinect v2s to determine the size of paper bales (approx. 1 m^3 each), and I took 20 consecutive pictures of the same bale without moving it.
The depth data shows considerable noise everywhere, but it is especially bad around edges, i.e. where an object in the foreground meets some other object or surface in the background (depth/no depth in about 50% of the pictures there). Since the minimum depth-measurement distance of a Kinect is about 500 mm, the spatial resolution might not even be enough to pick up a pen at more than 1 or 2 pixels of width, so you risk missing your pen entirely in some of your pictures.
Those issues might be mitigated if there are no IR sources (windows, lamps) apart from the Kinects themselves where you conduct your experiments.
If you want high precision, I think you need a different camera. I don't have any experience with the Intel.

Oliver

Re: Pose calibration of two static RGB-D sensors facing each other

Dmitry
Oliver, thanks for the answer.

I agree with you. Based on the comparison table at the bottom of this page, https://stimulant.com/depth-sensor-shootout-2/, we came to the conclusion that the RealSense SR300 is best suited to our situation.

But all the sensors mentioned in that list are consumer-grade (like 3D scanners); it is hard to find a comparison of professional sensors.

Do you know any good ones?
Re: Pose calibration of two static RGB-D sensors facing each other

james
I have done it with 2x SR300.
Keep in mind that noise from each camera interfering with the other certainly comes into play, so I'm not sure this would be suitable for your tight accuracy requirements.
I calibrated with transformations built from manually generated parameters for the translation and rotation between each camera and the centre, i.e. x y z xrot yrot zrot from each camera to the centre of the pair; see the sketch below.
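In PCL that boils down to something like the following untested sketch. The geometry here (cameras 0.5 m either side of the centre, facing each other along Z) and all the numbers are placeholders, not my actual parameters:

#include <cmath>
#include <Eigen/Geometry>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/transforms.h>

// Merge two clouds into the common "centre of pair" frame using
// manually measured camera-to-centre transforms.
pcl::PointCloud<pcl::PointXYZ>::Ptr mergeAtCentre(
    const pcl::PointCloud<pcl::PointXYZ> &cloud_a,
    const pcl::PointCloud<pcl::PointXYZ> &cloud_b)
{
  // Camera A: 0.5 m behind the centre, looking along +Z (identity rotation).
  Eigen::Affine3f a_to_centre =
      Eigen::Affine3f(Eigen::Translation3f(0.0f, 0.0f, -0.5f));
  // Camera B: 0.5 m in front, rotated 180 deg about Y so it faces camera A.
  Eigen::Affine3f b_to_centre =
      Eigen::Translation3f(0.0f, 0.0f, 0.5f) *
      Eigen::AngleAxisf(static_cast<float>(M_PI), Eigen::Vector3f::UnitY());

  pcl::PointCloud<pcl::PointXYZ>::Ptr merged(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ> tmp;
  pcl::transformPointCloud(cloud_a, tmp, a_to_centre);  // A -> centre frame
  *merged += tmp;
  pcl::transformPointCloud(cloud_b, tmp, b_to_centre);  // B -> centre frame
  *merged += tmp;
  return merged;
}

With manual measurement like this the accuracy is of course limited by how well you can measure the mounting geometry, which is part of why I doubt it reaches millimetre precision.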
Re: Pose calibration of two static RGB-D sensors facing each other

Dmitry
Hello James! Thanks for your response!

What positioning accuracy did you achieve in your project?
How did you calibrate the 2 x SR300 relative to some centre? Did you use only the RGB cameras?
Or do you mean that you just measured the distance between the cameras in the real world and then fed those parameters into the application?