So I've got a handful of Kinect point cloud captures, each in its own space.
I also have the positions of my different Kinects in a single world space thanks to PhotoScan (each expressed as x, y, z, omega, phi, kappa). How do I process each cloud so it is placed in a single world space, given the relevant camera position as the new "origin" of each captured cloud? I'd use a transformation matrix with pcl::transformPointCloud, but how do I use these parameters?
I am pretty sure that x, y, z should be the translation. Try this out for
two clouds and visualize the results, ideally one cloud in red and one in blue. omega, phi and kappa might be the rotations around the x, y and z axes. As a next step you can rotate around x with omega and see if the visualized result improves; if yes, rotate also around the y and z axes. Or do all three rotations at once. Otherwise, reading up on Wikipedia (maybe a search for "world space") might be a start.
Yes, the camera position and orientation have been determined and are expressed as:
x the translation on the x axis, y the translation on the y axis, z the translation on the z axis, omega the rotation around the x axis, phi the rotation around the y axis, kappa the rotation around the z axis. The tricky part is how to put many clouds into a world space using these parameters. I've read quite a bunch of documentation beyond wikis, and my trials aren't going well as I'm quite new to matrices.
I do not understand; you merge each cloud with its corresponding parameter set:

```cpp
/* METHOD #2: Using an Affine3f
   This method is easier and less error prone */
Eigen::Affine3f transform = Eigen::Affine3f::Identity();

// Define a translation along the corresponding x, y, z.
transform.translation() << x, y, z;

// Call the rotate function three times, once per axis.
transform.rotate (Eigen::AngleAxisf (omega, Eigen::Vector3f::UnitX()));
transform.rotate (Eigen::AngleAxisf (phi,   Eigen::Vector3f::UnitY()));
transform.rotate (Eigen::AngleAxisf (kappa, Eigen::Vector3f::UnitZ()));
```

The whole example code is from http://pointclouds.org/documentation/tutorials/matrix_transform.php
In reply to this post by Rather Not
You don't need to work out any matrices manually. This is what I am using:
```cpp
CloudPtr cloud1Cropped (new Cloud);
CloudPtr cloud2Cropped (new Cloud);
CloudPtr unifiedCloud  (new Cloud);

// pcl::getTransformation(x, y, z, roll, pitch, yaw)
pcl::transformPointCloud (*cloud1Cropped, *cloud1Cropped,
    pcl::getTransformation (calib.cam1.x, calib.cam1.y, calib.cam1.z,
                            pcl::deg2rad (calib.cam1.xRot),
                            pcl::deg2rad (calib.cam1.yRot),
                            pcl::deg2rad (calib.cam1.zRot)));

pcl::transformPointCloud (*cloud2Cropped, *cloud2Cropped,
    pcl::getTransformation (calib.cam2.x, calib.cam2.y, calib.cam2.z,
                            pcl::deg2rad (calib.cam2.xRot),
                            pcl::deg2rad (calib.cam2.yRot),
                            pcl::deg2rad (calib.cam2.zRot)));

*unifiedCloud  = *cloud1Cropped;
*unifiedCloud += *cloud2Cropped;
```
It turns out that using this simple process with the parameters x, y, z, omega, phi, kappa produced by Agisoft, I'm not even roughly aligning my clouds.
The positions of the cameras are correct, so obviously my way of handling these parameters is wrong. (These parameters express the position and orientation of the Kinect cameras, with the origin inside the circle they are arranged on. What I have is the cloud of each Kinect in its own reference frame, about one meter in front of it, with all the Kinects hopefully pointing toward the center. What I want is all the clouds roughly registered.) How would you use these parameters to approximately register my Kinect clouds? A function to input an origin and a quaternion seems to exist solely for visualization purposes. It would otherwise seem like a good approach, or does such a function exist and I'm missing it?
I'm not sure if I'm explaining myself clearly on this issue.
Maybe "Update origin and orientation for a point cloud" would be a more accurate title?