Well, as we open the gateways to Blender from MRL, it seemed like a pretty obvious leap to start trying to render point cloud data from the Kinect (via OpenNI) in Blender. I had the Kinect write all of the x, y, z points out to a text file. A small Python script in Blender then loads that text file and renders a small circle for each point.
Here's a scan of my office with an InMoov on the desk. It's pretty rough, but you can definitely see the depth data is there.
Right now, it's super slow to render the cloud. I downsample the data so it actually finishes rendering. I'm sure there are other ways to render this data from Python in Blender.
Here's the current Blender script. The "point_cloud.pcd" is a text file where each line represents an x, y, z point.
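The script itself isn't reproduced here, so below is a minimal sketch of the approach described above: read each "x y z" line from "point_cloud.pcd", keep only every Nth point, and add a small circle at each location. The downsampling factor, circle radius, and file location are placeholders, and the bpy calls assume a reasonably recent Blender Python API.

```python
# Runs inside Blender (bpy is only available in Blender's embedded Python).
import bpy

PCD_PATH = "point_cloud.pcd"   # each line: "x y z"
DOWNSAMPLE = 10                # keep every 10th point (assumed factor)
RADIUS = 0.01                  # circle size per point (assumed)

points = []
with open(PCD_PATH) as f:
    for i, line in enumerate(f):
        if i % DOWNSAMPLE != 0:
            continue
        parts = line.split()
        if len(parts) < 3:
            continue           # skip blank or malformed lines
        points.append(tuple(float(v) for v in parts[:3]))

# Add a small circle primitive at each sampled point.
for x, y, z in points:
    bpy.ops.mesh.primitive_circle_add(radius=RADIUS, location=(x, y, z))
```

Adding one object per point is probably the main bottleneck; a likely faster route would be to build a single mesh with one vertex per point (e.g. `mesh.from_pydata(points, [], [])`), though that's just a guess at one of the "other ways" mentioned above.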
Very cool, Kwatters!
Some ideas:
https://openslam.org/rgbdslam.html
It seems to have a pretty small footprint - possibly a relatively easy port from C++.