I have two PrimeSense RGB-D sensors placed opposite each other, so that they capture the top and bottom views of an object placed between them. I have calibrated the two cameras so that they share the same coordinate system, and hence the point clouds also lie in the same coordinate system.
My main aim is to determine the depth of the object using the top and the bottom views and the corresponding point clouds.
Is there a way to cast a perpendicular ray from one point cloud to the other and fill the space between the two point clouds with more points, so that ultimately the entire 3D object can be reconstructed from the point clouds of the top and bottom views?
I understand your camera setup, but I'm not sure I've understood your question.
You have two point clouds:
- Top surface
- Bottom surface
If you want to reconstruct an object from this (incomplete) information, here is what I would do:
- Triangulate bottom and top surfaces completely
- Detect points that lie on the border of the bottom/top surfaces
- For each point of the bottom surface BORDER, find the closest one on the top surface BORDER, and triangulate that way.
I am a bit unclear about how to compute the point cloud borders.
At the moment I convert the point cloud into range images and then find the borders. Is this correct?
Or should I use pcl::BoundaryEstimation?
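pcl::BoundaryEstimation works directly on the 3D cloud, so no range-image conversion is needed. A sketch of the usual call sequence follows; the two search radii are assumptions and need to be tuned to the sensor's point density (it requires PCL to build, so treat it as an outline rather than a drop-in):

```cpp
#include <pcl/features/boundary.h>
#include <pcl/features/normal_3d.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>

// Mark border points of a cloud with pcl::BoundaryEstimation.
pcl::PointCloud<pcl::Boundary>
findBorders(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud) {
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
      new pcl::search::KdTree<pcl::PointXYZ>);

  // Boundary estimation needs per-point normals.
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setRadiusSearch(0.01);  // assumed 1 cm neighbourhood
  ne.compute(*normals);

  pcl::BoundaryEstimation<pcl::PointXYZ, pcl::Normal, pcl::Boundary> be;
  pcl::PointCloud<pcl::Boundary> boundaries;
  be.setInputCloud(cloud);
  be.setInputNormals(normals);
  be.setSearchMethod(tree);
  be.setRadiusSearch(0.02);  // assumed 2 cm neighbourhood
  be.compute(boundaries);    // boundaries[i].boundary_point != 0 => border
  return boundaries;
}
```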
To find the closest point between the borders, do I iterate over all the points and pick the closest one? Or is there other PCL functionality that I could use?
Can you please elaborate a bit on the third point? How do I triangulate after finding the closest point between the top and bottom surface borders?
I use greedy projection for triangulation. I save the polygon mesh as a VTK file as well as a PLY file and tried visualizing it. However, the mesh is not visible. Can you please suggest what may be wrong? I use the same code as in the tutorial.
I am attaching the image of my result.
I have a small query. I have the borders of the point clouds. Will it be possible to triangulate between these now using addFace()? As far as I understood, addFace() needs a VertexIndex as input, which comes from the half-edge mesh?
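Yes, the VertexIndex values come from the half-edge mesh itself: addVertex() returns the index that addFace() then consumes. A minimal sketch of that pattern with the pcl::geometry half-edge mesh, assuming a TriangleMesh with pcl::PointXYZ as vertex data (adjust the traits to your setup; needs PCL to build):

```cpp
#include <cassert>
#include <pcl/geometry/triangle_mesh.h>
#include <pcl/point_types.h>

// Store the actual points as vertex data of the half-edge mesh.
typedef pcl::geometry::DefaultMeshTraits<pcl::PointXYZ> MeshTraits;
typedef pcl::geometry::TriangleMesh<MeshTraits> Mesh;

void buildOneFace() {
  Mesh mesh;

  // addVertex() returns the VertexIndex that addFace() expects.
  Mesh::VertexIndex v0 = mesh.addVertex(pcl::PointXYZ(0.f, 0.f, 0.f));
  Mesh::VertexIndex v1 = mesh.addVertex(pcl::PointXYZ(1.f, 0.f, 0.f));
  Mesh::VertexIndex v2 = mesh.addVertex(pcl::PointXYZ(0.f, 1.f, 0.f));

  // addFace() returns an invalid FaceIndex if adding the face would
  // make the mesh non-manifold.
  Mesh::FaceIndex f = mesh.addFace(v0, v1, v2);
  assert(f.isValid());
}
```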
I have another query regarding the same problem.
So now I have a point cloud (pcl::PointCloud<pcl::PointXYZ>::Ptr). Now I want to generate faces within this point cloud using addFace(). For this I create a mesh (typedef pcl::geometry::PolygonMesh<pcl::geometry::DefaultMeshTraits<std::vector<int> > > Mesh). Using this mesh I try to access the vertices; however, I do not understand how I should do that. Can you please suggest something? Please find below my code: