Just work directly on the grabber buffers, extracting the depth and RGB images yourself. That way you can put the data directly into the cloud in an organized way (as a matrix flattened into a vector, the usual C style) and then use it in OpenCV.
For example: https://github.com/giacomodabisias/Pcl_OpenNI2_wrapper — in that case the data is just pushed into the vector. In your case you should first allocate the needed space in the vector and then use the [] operator to put the points in the right place :)
In the grabber callbacks, the PCL openni_wrapper::Image/DepthImage objects should expose a pointer to the underlying pixel buffers. You can use that pointer to construct a cv::Mat that references the same data, with no copy. Note that this is a direct depth image, not a disparity map.
With the OpenNI 1.x wrapper, I think the correct calls would be [object].getMetaData().data() and [object].getMetaData().getDataSize() to get the pointer and buffer sizes.
If you want all three forms of data (depth, color, and point cloud), there is currently no callback that delivers them together. Your best bet is to subscribe to the color and depth images (which can be synchronized) and construct the equivalent point cloud yourself. The relevant code is in the protected member function pcl::OpenNIGrabber::convertToXYZRGBPointCloud(...).