I have an organized coloured point cloud, taken with an Asus Xtion Pro Live, which I am saving as a PNG using: pcl::io::savePNGFile("cloudRGB.png", *cloudRGB, "rgb").
Since it is an organized point cloud, it has the same dimensions as an image: 480 x 640. I had the idea that the points have some relation to the pixels, so the point at index 147506 should be roughly equivalent to the pixel [308, 229] (147506/480 = 307.3 and 147506/640 = 230.5). This idea worked for that specific point, which is close to the centre, but I tried the same with another point, closer to the nose (index 145029), and it did not work.
If you look at the images, it is interesting that the PNG file doesn't have the white space around the person that the point cloud does. I have tried to look into the code but I haven't been able to understand how the point cloud is converted into a PNG file. Does someone know how this works?
I also thought about projecting the 3D points into 2D image coordinates, but either my equations are wrong or that is not how it is done. Moreover, if this is the way the PNG file is produced, how are the NaN points treated?
There is a direct relation between the point cloud and the "depth map", which is a 2D image representing depth (encoded on gray levels / color).
From a certain perspective (the sensor origin) the point cloud looks like an image (you can't see the holes produced by occlusion).
The image is organized in rows and columns; the point cloud is stored as a flat list in the same row-major order.
Note that the Asus Xtion sends depth maps through the USB and then PCL converts them to point clouds by projecting the depth map into space (given the camera focal length etc.).
Have you tried with the image size as 640 x 480 (and not 480 x 640)? It is just a matter of getting the rows/columns in the right order. I'd suggest testing with the top-right corner (which should be point 640, if I remember correctly).
Usually NaN points are mapped to a special color (pitch black, for example) to indicate that there is no value there.