BoofCV point cloud

A small demo of using BoofCV to combine the depth map and the video stream into a real-time point cloud.

 

https://github.com/MyRobotLab/pyrobotlab/blob/develop/home/Mats/OpenKinectPointCloud.py


GroG's picture

Great Demo Mats !

You're lookin good ;)

I looked a little at your class, and it would be helpful if the static loading of natives could come from a Maven dependency or even a JavaCPP build.

It looks like there is low latency in display - is that true ?

Is it displayed by BoofCV too ?

What were the two demos - one did a point cloud and the other did an overlay ?

Cool Stuff Mats ! ;)

Mats's picture

Examples

I found the examples in org.myrobotlab.boofcv

The first example I used was DisplayKinectPointCloudApp. It creates a point cloud from two images: one is the depth image and the second is a normal picture. It's in the org.myrobotlab.boofcv folder in MRL.

The original example is here: https://boofcv.org/index.php?title=Example_Point_Cloud_Depth_Image
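The core of building a point cloud from a depth image is back-projecting each pixel through the pinhole camera model. A minimal plain-Java sketch of that idea (the intrinsic values below are made-up placeholders, not the real Kinect calibration, and this is not the BoofCV example's actual code):

```java
/** Back-projects a depth pixel (u,v) with depth z (meters) into a 3D
 *  camera-space point: x = (u-cx)*z/fx, y = (v-cy)*z/fy. */
public class BackProject {
    // Hypothetical Kinect-like intrinsics; real values come from calibration.
    static final double fx = 525.0, fy = 525.0, cx = 319.5, cy = 239.5;

    static double[] pixelToPoint(int u, int v, double z) {
        double x = (u - cx) * z / fx;
        double y = (v - cy) * z / fy;
        return new double[]{x, y, z};
    }

    public static void main(String[] args) {
        // A pixel near the image center at 2 m maps close to the optical axis.
        double[] p = pixelToPoint(320, 240, 2.0);
        System.out.printf("%.4f %.4f %.4f%n", p[0], p[1], p[2]);
    }
}
```

Doing this for every valid depth pixel, and pairing each point with the RGB value at the same pixel, yields the colored cloud the example displays.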

The second example I used is OpenKinectStreamingTest. It reads both the video and depth streams and shows them in two separate windows. It's also in the org.myrobotlab.boofcv folder in MRL.

The original example is here: https://boofcv.org/index.php?title=Tutorial_Kinect

I merged them and created OpenKinectPointCloud. I think this is very similar to what Alessandruino did a few years ago: http://myrobotlab.org/content/kinect-fun-mac-using-libfreenect

The loading of freenect was just in the example code from Peter Abeles. I tried it without that, and it still finds the freenect libraries. What I have done so far is just a simple test.

The latency is very low - just a few ms.

The viewer is part of BoofCV.

In short:

viewer = VisualizeData.createPointCloudViewer();
// build a List<Point3D_F64> points and an int[] colors
viewer.addCloud(points, colors);
ShowImages.showWindow(viewer.getComponent(), "Point Cloud", true);

 

 

GroG's picture

Worky !

It crashes sometimes in the viewer here

@Override
public void addCloud(List<Point3D_F64> cloudXyz, int[] colorsRgb) {
    if( cloudXyz.size() > colorsRgb.length ) {
        throw new IllegalArgumentException("Number of points do not match");
    ...
But it was very satisfying to run your script and see my office as a colored 3D model.

 

Mats's picture

Odometry

I have been playing around with BoofCV for a few hours, and it has some nice methods to create a point cloud from the depth and camera streams. That is what I used to create the video here:

http://boofcv.org/javadoc/boofcv/alg/depth/VisualDepthOps.html

Currently I'm trying to use odometry, i.e. getting the camera movement by comparing two images to obtain the transformation matrix.

https://boofcv.org/index.php?title=Example_Visual_Odometry_Depth
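Conceptually, visual odometry yields one rigid transform per frame, and the camera-to-world pose is the running product of those transforms. A plain-Java sketch of that accumulation with 4x4 homogeneous matrices (BoofCV has its own transform types; this only illustrates the bookkeeping, using pure translations for simplicity):

```java
/** Accumulates per-frame rigid motions into a camera-to-world pose by
 *  multiplying 4x4 homogeneous transform matrices. */
public class PoseChain {
    static double[][] multiply(double[][] a, double[][] b) {
        double[][] c = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    static double[][] translation(double tx, double ty, double tz) {
        return new double[][]{{1,0,0,tx},{0,1,0,ty},{0,0,1,tz},{0,0,0,1}};
    }

    public static void main(String[] args) {
        // Two forward steps of 0.1 m accumulate to 0.2 m along z.
        double[][] pose = translation(0, 0, 0);        // identity start
        pose = multiply(pose, translation(0, 0, 0.1)); // frame 1 motion
        pose = multiply(pose, translation(0, 0, 0.1)); // frame 2 motion
        System.out.println(pose[2][3]);                // accumulated z
    }
}
```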

The bug that you describe has been fixed :)

GroG's picture

Thanks ;)

The conversion from spherical to Cartesian coordinates has been giving me a headache, but BoofCV solved it quite well. Now it's a matter of finding the details in Boof.
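For reference, the standard spherical-to-Cartesian conversion is short to write out directly (this is the textbook math convention, not a specific BoofCV API; depth sensors often use a range/bearing variant of the same idea):

```java
/** Converts spherical coordinates to Cartesian: r is the radius,
 *  theta the polar angle from +z, phi the azimuth from +x. */
public class SphericalToCartesian {
    static double[] convert(double r, double theta, double phi) {
        double x = r * Math.sin(theta) * Math.cos(phi);
        double y = r * Math.sin(theta) * Math.sin(phi);
        double z = r * Math.cos(theta);
        return new double[]{x, y, z};
    }

    public static void main(String[] args) {
        // theta = 0 is straight along +z, so only z = r survives.
        double[] p = convert(2.0, 0.0, 0.0);
        System.out.printf("%.3f %.3f %.3f%n", p[0], p[1], p[2]);
    }
}
```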

RemoveRadialPtoN_F64 looks promising.

I'm really excited about your odometry work.   

What do you think of a library-agnostic point cloud definition ? Boof has some really incredible work, and to me, so far, it seems very well designed. I'm interested in getting information into JMonkey, but I still think it's worth JMonkey "not knowing" about Boof. Do you have any thoughts from your work so far ?

Mats's picture

Visual Odometry

BoofCV has so much in it. So I downloaded one of the examples from BoofCV https://boofcv.org/index.php?title=Example_Visual_Odometry_Depth and adapted it a little bit.

It's now available in the boofcv folder in MRL.

I then created a new program, OpenKinectOdometry, that creates two windows.

One of them is supposed to show the same point cloud as in OpenKinectPointCloud.

The other window is to test what happens when I apply the translation / rotation from the odometry to the point cloud and then view it.

In the processOdometry() method, I try to apply the translation / rotation using a transformation matrix.

You can see that I first build the transformation matrix from the translation / rotation that I get from the odometry, but then I overwrite it with an identity matrix. I do that just for debugging - comment out either one depending on what you want to do.

I'm in deep water here, but at least I get the same point cloud back when I apply an identity matrix. As soon as I try any translation, I get strange results: the point cloud shifts, but it also seems like some points in the point cloud disappear. I uploaded my experiments so that you can see where I am at.
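The identity-matrix sanity check described above can be sketched in a few lines of plain Java (not BoofCV's Point3D_F64 types): applying a 4x4 homogeneous transform to each point must return the cloud unchanged when the matrix is the identity, while a pure translation should only shift it, never drop points.

```java
/** Applies a 4x4 homogeneous transform to a 3D point. With the identity
 *  matrix the point comes back unchanged; a translation only shifts it. */
public class TransformCloud {
    static double[] apply(double[][] m, double[] p) {
        double[] out = new double[3];
        for (int i = 0; i < 3; i++)
            out[i] = m[i][0]*p[0] + m[i][1]*p[1] + m[i][2]*p[2] + m[i][3];
        return out;
    }

    public static void main(String[] args) {
        double[][] identity = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}};
        double[][] shiftX   = {{1,0,0,0.5},{0,1,0,0},{0,0,1,0},{0,0,0,1}};
        double[] p = {1.0, 2.0, 3.0};
        double[] same    = apply(identity, p); // identical to p
        double[] shifted = apply(shiftX, p);   // x moved by 0.5
        System.out.println(same[0] + " " + shifted[0]);
    }
}
```

If points vanish after a transform, one thing worth checking is whether the viewer or a later step filters points by depth or bounds, since a shift can push valid points outside such a window.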