I'm trying to use PCL; however, I'm not using a depth-sensing camera. Instead, I have a simple webcam: I detect static targets with OpenCV and transform their positions from the image frame to the world reference frame. Since I'm using ROS, I have a node that publishes the positions of the targets currently being detected (the targets the camera has in its view).
This means my point cloud is actually built up over time from successive samples/detections of the target positions (since the camera is moving and its pose has some error, the detected target positions are noisy).
I thought about using a clustering algorithm to estimate the positions of the targets. Following that idea, I used EuclideanClusterExtraction to extract clusters from the point cloud and computed each cluster's position estimate with compute3DCentroid. However, since this method is not intended for this use, two clusters that are relatively close to each other get detected as a single one.
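To make the problem concrete, here is a minimal sketch (plain Python, not the actual PCL C++ API) of what EuclideanClusterExtraction does conceptually: single-linkage growth, where any point within the distance tolerance of a cluster member joins that cluster. The point coordinates and the 0.3 m tolerance are made-up illustration values, not from my setup. It shows why two targets closer together than the tolerance inevitably merge into one cluster:

```python
import math
from collections import deque

def euclidean_clusters(points, tolerance):
    """Single-linkage clustering by region growing: a point joins a cluster
    if it is within `tolerance` of ANY member (the idea behind PCL's
    EuclideanClusterExtraction, brute-force here instead of a kd-tree)."""
    n = len(points)
    visited = [False] * n
    clusters = []
    for seed in range(n):
        if visited[seed]:
            continue
        visited[seed] = True
        queue = deque([seed])
        cluster = []
        while queue:
            i = queue.popleft()
            cluster.append(i)
            for j in range(n):
                if not visited[j] and math.dist(points[i], points[j]) <= tolerance:
                    visited[j] = True
                    queue.append(j)
        clusters.append(cluster)
    return clusters

def centroid(points, indices):
    """Mean of the cluster members (what compute3DCentroid returns)."""
    k = len(indices)
    return tuple(sum(points[i][d] for i in indices) / k for d in range(3))

# Two noisy targets 1 m apart separate cleanly with a 0.3 m tolerance...
a = [(0.0, 0.0, 0.0), (0.05, 0.02, 0.0), (-0.03, 0.04, 0.0)]
b = [(1.0, 0.0, 0.0), (1.04, -0.02, 0.0), (0.97, 0.03, 0.0)]
print(len(euclidean_clusters(a + b, 0.3)))  # → 2

# ...but the same targets only 0.2 m apart merge into one cluster,
# because the gap is smaller than the tolerance:
c = [(x + 0.2, y, z) for (x, y, z) in a]
print(len(euclidean_clusters(a + c, 0.3)))  # → 1
```

Shrinking the tolerance separates close targets but starts splitting a single noisy target into several clusters, which is exactly the trade-off I'm stuck on.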
I'm looking for suggestions on how to solve this kind of problem and/or how to improve the way I'm estimating the target positions.