I found this here: http://www.gurucycling.com/wp-content/uploads/2013/09/Kinect-for-Positional-Data-Acquisition.pdf
and tried to represent it accurately in OpenCVFilterPointCloud.java with the following code:
zw = -1 * (float) depth / 1000;
xw = 6 - 2 * (xv - 639/2) * Math.tan(57/2 * 0.0174533) * (zw / 640);
yw = 4.5 - 2 * (479 - yv - 479/2) * Math.tan(43/2 * 0.0174533) * (zw / 480);
points[yv][xv] = new Point3df((float) xw, (float) yw, (float) zw);
And what do I get ?
A widdle green dot ... DOH !
So scaling is a problem ... I experimented with trying to scale it correctly, but things still seemed out of whack.
I'm also curious about the conversion functions for the Kinect (although this seemed to be the most concrete example with intrinsic Kinect values).
Specifically, the (Pixelx - 639/2) and (Pixely - 479/2) terms:
it seems silly to be subtracting a fractional constant .. perhaps what the author meant to say was
(Pixelx - 639)/2 and 479 - (Pixely - 479)/2
not really sure ..
Anyway - when all examples fail, and nothing seems to work - it's time to go back to very controlled experimentation. Where did I put my Greek cross..
Grog
Your unit values are in meters; your viewer probably uses millimeters, so just multiply your values by 1000 to get them in millimeters.
The Pixelx - 639/2 and similar constants are there to adjust so the origin (x = 0 and y = 0) is at the center point of the image, and you get a measurement relative to the position of the Kinect (negative values are to the left and below, or the reverse, I don't remember).
And if you read the text carefully, the example states that the Kinect is at position x = 6 m, y = 4.5 m, z = 0, so remove the 6 and 4.5 from the equation if you want it relative to the Kinect.
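Something like this should get you close (just a sketch, reusing your variable names and Point3df; note that 57/2 and 639/2 in your original are silently integer divisions in Java):

// relative to the Kinect, in millimeters (no 6 m / 4.5 m room offset, no /1000)
// 639/2.0 and 479/2.0 are just the image center, i.e. 319.5 and 239.5
double tanHalfFovX = Math.tan(Math.toRadians(57.0 / 2.0)); // horizontal FOV ~57 deg
double tanHalfFovY = Math.tan(Math.toRadians(43.0 / 2.0)); // vertical FOV ~43 deg

float zw = depth;                                                       // raw Kinect depth is already in mm
float xw = (float) ((xv - 639 / 2.0) * 2.0 * tanHalfFovX * zw / 640.0);
float yw = (float) ((479 / 2.0 - yv) * 2.0 * tanHalfFovY * zw / 480.0); // +y up; flip the sign if your viewer wants y down

points[yv][xv] = new Point3df(xw, yw, zw);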
Hi Calamity ..
yup .. there are so many places to adjust the relative scale .. both in the point cloud and in jme settings
I started looking through the jmonkey setup - and I think a lot of it was set for the blender imports of InMoov (which makes sense)
Now I'm looking to take out rootNode.scale(.5f) and all translations and see if I can start making steps in the right direction..
abstract unit = 1 :)
I have to start simple ;)
this is what we have so far ...
Jmonkey :
rootNode.scale(1.0f)
camera location is 0, 0, 12
I put a red wire cube 1 X 1 X 1 on the origin .. I can see the pointcloud upper right and some silly blue boxes (I should clean up my virtual room ;)
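For reference, the cube and camera setup only takes a few lines in jME (a sketch, assuming the standard SimpleApplication fields assetManager, rootNode and cam):

// 1 x 1 x 1 red wireframe cube, centered on the origin
Box box = new Box(0.5f, 0.5f, 0.5f);                      // Box takes half-extents
Geometry cube = new Geometry("ReferenceCube", box);
Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
mat.setColor("Color", ColorRGBA.Red);
mat.getAdditionalRenderState().setWireframe(true);        // wireframe so the cloud inside stays visible
cube.setMaterial(mat);
rootNode.attachChild(cube);

rootNode.scale(1.0f);                                      // no global scaling
cam.setLocation(new Vector3f(0f, 0f, 12f));                // camera at 0, 0, 12 looking down -z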
With that formula .. in theory without scaling modifications it "should" be close to fitting inside the red box (ish)..
Ok .. lets start moving it ...
Left is Right ... Right is ?
Ok .. looks better(ish)
JMonkey has a different coordinate system than the kinect .. not really a big surprise, so I think I'll adjust this on the point cloud loading side..
This I think is jMonkey's documented coordinate system ..
but this points to some more "clean-up" I should do ..
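The flip itself should be trivial on the loading side, something like this (a sketch, assuming the cloud is still in the Kinect's native convention of x right, y down, z away from the sensor, versus jME's y-up / -z-into-the-screen):

float xJme =  xw;   // left/right is the same in both
float yJme = -yw;   // Kinect y grows downward, jME y grows upward
float zJme = -zw;   // Kinect looks along +z, jME looks along -z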
JMonkey's convention is to have a "Mesh" accept a FloatBuffer of triplets x, y, z.
more hacking required ...
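Roughly what that hacking looks like (a sketch only - it assumes a float[] cloud already packed as x, y, z triplets, and uses jME's custom-mesh route of Mesh.setBuffer with Type.Position):

// uses com.jme3.scene.Mesh / VertexBuffer / Geometry, com.jme3.material.Material,
// com.jme3.math.ColorRGBA and com.jme3.util.BufferUtils
Mesh mesh = new Mesh();
mesh.setMode(Mesh.Mode.Points);                  // render every vertex as a point
mesh.setBuffer(VertexBuffer.Type.Position, 3, BufferUtils.createFloatBuffer(cloud));
mesh.updateBound();

Geometry cloudGeom = new Geometry("PointCloud", mesh);
Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
mat.setColor("Color", ColorRGBA.Green);
cloudGeom.setMaterial(mat);
rootNode.attachChild(cloudGeom);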
I did some cleanup - it's still distorted a lot around the "edges" - but towards the center it looks pretty ok