Would it be possible to detect edges in
an unorganized cloud?
I have looked at the organized-cloud method described at http://www.pointclouds.org/blog/gsoc12/cchoi/index.php. Radu Rusu's dissertation describes edge detection by finding the points with the highest curvature values. Is there a PCL implementation of this?

Thanks,
Brad
Did you manage to get a solution to this? I am facing the same issue. Any leads?

I'm stuck at a similar issue, and I'm wondering if there's been any progress on this.
I have an unorganized 3D point cloud (with color) and I want to perform edge detection on it. Are we still missing an edge detector for unorganized point clouds? I see that there is an implementation of edge detection for organized point clouds (http://www.pointclouds.org/blog/gsoc12/cchoi/index.php).

I can think of a quick and dirty workaround for adapting the available edge detector to unorganized point clouds: given a view and a camera matrix, one could rasterize the 3D point cloud from that view, determine the 3D-to-2D correspondence locations in image coordinates, and then use that information to convert the unorganized cloud into an organized one... However, as I mentioned, this is quick and dirty and depends on having knowledge of the camera views.

Can somebody chime in on this please? Thanks in advance!
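The rasterization workaround described above can be sketched in a few lines. This is a minimal, hedged illustration only, not PCL code: it assumes a simple pinhole camera model with points already in the camera frame, and the names `projectToPixel`, `organize`, and the intrinsics `fx/fy/cx/cy` are made up for this sketch.

```cpp
#include <cmath>
#include <vector>

struct Pt3 { double x, y, z; };

// Project a 3D point (camera frame, looking down +z) into pixel coordinates
// with a pinhole model. Returns false if the point is behind the camera.
bool projectToPixel(const Pt3& p, double fx, double fy, double cx, double cy,
                    int& u, int& v) {
    if (p.z <= 0) return false;
    u = (int)std::lround(fx * p.x / p.z + cx);
    v = (int)std::lround(fy * p.y / p.z + cy);
    return true;
}

// Rasterize an unorganized cloud into a width x height "organized" grid,
// keeping the nearest point per pixel; empty pixels hold NaN, mimicking
// the invalid-point convention of organized clouds.
std::vector<Pt3> organize(const std::vector<Pt3>& cloud, int width, int height,
                          double fx, double fy, double cx, double cy) {
    double nan = std::nan("");
    std::vector<Pt3> grid(width * height, Pt3{nan, nan, nan});
    for (const Pt3& p : cloud) {
        int u, v;
        if (!projectToPixel(p, fx, fy, cx, cy, u, v)) continue;
        if (u < 0 || u >= width || v < 0 || v >= height) continue;
        Pt3& cell = grid[v * width + u];
        if (std::isnan(cell.z) || p.z < cell.z) cell = p;  // keep nearest point
    }
    return grid;
}
```

The resulting grid could then be copied into an organized `pcl::PointCloud` (width, height, `is_dense = false`) and fed to the organized edge detector, with the caveats about view dependence noted above.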
Dear SirM2X
How about this approach: you define a point type x, y, z, n, with n an integer greater than or equal to zero. (If you want to stay compatible with templated PCL, you would instead carry n alongside the cloud, e.g. as a std::vector<int> with matching indices.)

You voxel-downsample your point cloud, writing into n, for each generated point, the number of input points it was downsampled from. I don't think PCL can do this directly; I use OpenCV sparse matrices for this downsampling purpose. In this case it is also better to downsample to the voxel-cell center instead of the center of mass.

You then use this n as a greyscale value. Not really greyscale, but it is a weight, and we can think of the cloud as a 3D greyscale image (a voxel with no downsampled points is black). Edge and corner operators exist for 2D matrices (see the Sobel operator, https://en.wikipedia.org/wiki/Sobel_operator, and corner detection, https://en.wikipedia.org/wiki/Corner_detection), so I assume you can transfer this 2D machinery to 3D as well. For Sobel it should in theory be trivial, except that instead of 8 neighbor contributions to sum up you will have 26, which might cause sensitivity problems.

I have no clue what is already there in point-cloud edge-detection theory. This is a brainstorming idea, so look around carefully so as not to reinvent the wheel. I explored the OpenCV API a bit more last night, and it might also bring some ideas to your mind.

If your data allows it (rooms or buildings might be best), you can also fit RANSAC planes and check whether the intersection region (a line derived by linear algebra) is supported by enough points, where "supported" means the distance from a point to both planes is below a threshold.

Jan
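Jan's counting step can be sketched without OpenCV or PCL. This is a minimal illustration under stated assumptions: the function name `voxelDownsampleWithCounts` is made up, the key packing assumes voxel indices stay below 2^20 in magnitude per axis, and output points sit at voxel-cell centers as he suggests.

```cpp
#include <cmath>
#include <unordered_map>
#include <utility>
#include <vector>

struct Pt { double x, y, z; };

// Pack a voxel index triple into one 63-bit key; assumes |index| < 2^20 per axis.
static long long voxelKey(long long ix, long long iy, long long iz) {
    const long long off = 1LL << 20;  // shift indices to be non-negative
    return ((ix + off) << 42) | ((iy + off) << 21) | (iz + off);
}

// Downsample to voxel-cell centers, recording in each output cell how many
// input points fell into it -- the "3D greyscale" weight n described above.
std::vector<std::pair<Pt, int>> voxelDownsampleWithCounts(
        const std::vector<Pt>& cloud, double cell) {
    std::unordered_map<long long, int> counts;
    std::unordered_map<long long, Pt> centers;
    for (const Pt& p : cloud) {
        long long ix = (long long)std::floor(p.x / cell);
        long long iy = (long long)std::floor(p.y / cell);
        long long iz = (long long)std::floor(p.z / cell);
        long long k = voxelKey(ix, iy, iz);
        counts[k]++;
        // cell center, not center of mass, as suggested in the post
        centers[k] = Pt{(ix + 0.5) * cell, (iy + 0.5) * cell, (iz + 0.5) * cell};
    }
    std::vector<std::pair<Pt, int>> out;
    for (const auto& kv : counts) out.push_back({centers[kv.first], kv.second});
    return out;
}
```

A 3D Sobel-like operator would then run over these weighted cells; that part is left out here since, as the post notes, it is untested brainstorming.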
In reply to this post by SirM2X
> I can think of a quick and dirty workaround for adapting the available edge
> detector for unorganized point clouds:
>
> Given a view and a camera matrix, one could rasterize the 3D point cloud
> from that view, determine the 3D-to-2D correspondence locations in the image
> coordinates and then use that information for converting the unorganized
> cloud to an organized one... However, as I mentioned, this is quick and
> dirty and would be dependent on having knowledge about camera views.
>
> Can somebody chime in on this please?

Well, just letting you know nevertheless.

Cheers,
Sérgio
Hi guys!
Any updates on edge detection for unorganized point clouds? Thanks! Lucas
Two methods I use for finding edges in unorganized clouds:

1) Inverse statistical outlier removal. The primary issue is that you have to tune the percentage cut to control how much edge vs. interior surface remains in your cloud, which is not very scalable (it works well for a relatively known object or scene).

2) *my current fav* For every point, do a nearest-neighbor radius search, then draw vectors from that center point to all of its neighbors. Take any one of those vectors and, by measuring the angle between it and all the others, generate an average vector direction. Then look for the maximum angular deviations from that average vector in both CW and CCW directions (i.e., look for the dead zone), and set your edge specificity based on how large a dead-zone arc length you require. This is actually super scalable and pretty fun to play with.
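The "dead zone" test in method 2 can be illustrated in 2D (a toy sketch, not PCL code; the function name `maxAngularGap` is made up here): sort the neighbor directions by angle and take the largest gap between consecutive angles, including the wrap-around gap. An interior point has neighbors all around, so every gap is small; a boundary point has neighbors only on one side, so one gap is large.

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// Largest angular gap ("dead zone") among 2D neighbor directions around a
// point; dirs holds (dx, dy) offsets to neighbors and must be non-empty.
double maxAngularGap(const std::vector<std::pair<double, double>>& dirs) {
    const double pi = std::acos(-1.0);
    std::vector<double> angles;
    for (const auto& d : dirs) angles.push_back(std::atan2(d.second, d.first));
    std::sort(angles.begin(), angles.end());
    // gap that wraps around from the largest angle back to the smallest
    double maxGap = 2.0 * pi - (angles.back() - angles.front());
    for (size_t i = 1; i < angles.size(); ++i)
        maxGap = std::max(maxGap, angles[i] - angles[i - 1]);
    return maxGap;
}
```

Thresholding this gap against a minimum arc length reproduces the "edge specificity" knob described above; the full 3D version below additionally needs a reference axis to define the rotation sense.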
Hi!
I am now also interested in edge detection in unorganized point clouds. @Sneaky Polar Bear: do you have some code you could share with us implementing your *favorite method*? That would be greatly appreciated.
This is not directly runnable (you will need to build or implement a 3D vector class). Also, I only built it to the capacity that I needed and planned on expanding it later, so it has some bits (like the fixed vertical z-axis reference) that would need to be made adaptive.
Regardless, it should give you a start / an idea of what I was describing above.

    void PCL_Util::planarEdgeExtractionV2(pcl::PointCloud<pcl::PointXYZ>::Ptr &targetCloud,
                                          double minInteriorAngleRad)
    {
        pcl::KdTreeFLANN<pcl::PointXYZ> kdTree;
        kdTree.setInputCloud(targetCloud);
        pcl::PointCloud<pcl::PointXYZ>::Ptr edgePts(new pcl::PointCloud<pcl::PointXYZ>);

        for (int i = 0; i < targetCloud->points.size(); i++)
        {
            std::vector<int> indicies;
            std::vector<float> distances;
            std::vector<Vector3> ptDirections;
            pcl::PointXYZ tp = targetCloud->points.at(i);
            Vector3 referenceDirection = Vector3(0, 0, 0);
            double averageAngle = 0;
            double count = 0;

            kdTree.nearestKSearch(i, 24, indicies, distances);
            for (int ii = 0; ii < indicies.size(); ii++)
            {
                if (i != indicies.at(ii))
                {
                    pcl::PointXYZ cp = targetCloud->points.at(indicies.at(ii));
                    Vector3 currentDirection = Vector3(tp.x - cp.x, tp.y - cp.y, tp.z - cp.z);
                    if (referenceDirection.Magnitude() == 0 && currentDirection.Magnitude() != 0)
                    {
                        referenceDirection = currentDirection;
                    }
                    if (currentDirection.Magnitude() != 0)
                    {
                        double currentAngle =
                            referenceDirection.DirectionalAngle(currentDirection, Vector3(0, 0, 1));
                        averageAngle += currentAngle;
                        count += 1;
                    }
                }
            }
            averageAngle = averageAngle / count;

            // apply correction rotation to reference vector to place it at the directional average
            referenceDirection = referenceDirection.Rotate(Vector3(0, 0, 1), averageAngle);
            referenceDirection.Normalize();

            double maxAngle = -M_PI;
            double minAngle = M_PI;
            indicies.clear();
            distances.clear();
            kdTree.nearestKSearch(i, 24, indicies, distances);
            for (int ii = 0; ii < indicies.size(); ii++)
            {
                if (i != indicies.at(ii))
                {
                    pcl::PointXYZ cp = targetCloud->points.at(indicies.at(ii));
                    Vector3 currentDirection = Vector3(tp.x - cp.x, tp.y - cp.y, tp.z - cp.z);
                    if (currentDirection.Magnitude() != 0)
                    {
                        double currentAngle =
                            referenceDirection.DirectionalAngle(currentDirection, Vector3(0, 0, 1));
                        if (currentAngle > maxAngle) { maxAngle = currentAngle; }
                        if (currentAngle < minAngle) { minAngle = currentAngle; }
                    }
                }
            }

            if ((2.0 * M_PI - (maxAngle - minAngle)) > minInteriorAngleRad)
            {
                edgePts->points.push_back(tp);
            }
        }
        pcl::copyPointCloud(*edgePts, *targetCloud);
    }
@SneakyPolarBear: Thanks for sharing your code. I had revisited this thread a couple of times but never saw your reply with the code; maybe there was something strange with my browser cache. I hope I can implement it into my code.
Cheers! 
In reply to this post by Sneaky Polar Bear
Dear Sneaky Polar Bear,

I had a look at your code, for which I am thankful that you shared it with us. I am just wondering about the implementation of the method DirectionalAngle:

    vector_ref.DirectionalAngle(vector_current, Vector3(0, 0, 1))

How is this one computed? Many thanks for the clarification in this respect.

Kind regards,
Filip Rooms
@SneakyPolarBear: could you point me to how this directional-angle method is implemented? The angle with respect to one other vector via the dot product is clear enough, but with respect to two other vectors (which may not be orthogonal?) is not clear to me. Could you help me here?

Kind regards,
Filip
Sorry for the delay; here is the function you asked for. I am like 90% sure Eigen has something for this in its library, but it was faster to write than to find. Basically it just gives the rotation a signed magnitude (you need a sign, not just the angle between the vectors, if you want to average over a full circle).
    double Vector3::DirectionalAngle(Vector3 v2, Vector3 rotAxis)
    {
        double unsignedAngle = Angle(v2);
        double signedAngle;
        Vector3 anglePlane = CrossProduct(v2);
        double axisSimilarity = rotAxis.Angle(anglePlane);
        if (axisSimilarity < M_PI_2) { signedAngle = unsignedAngle; }
        else { signedAngle = -unsignedAngle; }
        return signedAngle;
    }
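For Filip's question, an equivalent closed form may help. This is a sketch, not the author's code: it assumes a plain `Vec3` struct (hypothetical here) and computes the same signed angle in one step with atan2, using the component of the cross product along the rotation axis as the sine term and the dot product as the cosine term.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Signed angle from a to b around axis: positive when the rotation from a to b
// is counter-clockwise as seen looking down the axis.
double directionalAngle(const Vec3& a, const Vec3& b, const Vec3& axis) {
    return std::atan2(dot(cross(a, b), axis), dot(a, b));
}
```

The posted Vector3 method gets the sign by checking whether the cross-product direction lies within 90 degrees of the rotation axis; the atan2 form folds that check and the magnitude into a single call.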