Hi GroG,
Have you implemented Kinect in InMoov, or is that a forward-looking project? If so, what do you have it doing? Is the new Kinect usable, or is it not open-sourced yet? What do you hope or plan for the Kinect to do?
Does InMoov/MRL make use of two cameras in the eye sockets, or just one?
Is there a reason to have both a Kinect and cameras?
Should I learn Python?
thanks
Hello WKinne & Welcome
InMoov using Kinect is a project we are all interested in doing. MRL "had" Kinect support in it, but the details of the libraries and drivers leave something to be desired. Besides myself, about 4 people have gotten the point cloud to work in MRL: http://myrobotlab.org/content/point-cloud-fixed
The problem with it currently is that the display is way too slow.
The current interest is getting gesture recognition working (a Simon Says game). Right now, I'm the primary developer and my Kinect is broken :(
Kinect sensor data is expected to be used for intelligent inverse kinematics & SLAM.
Alessandruino & Gael both have working Kinects.
Just one at the moment, but MRL's OpenCV service has a stereo disparity function which could be used in the future to aid in ranging and other depth information.
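To give a feel for what stereo disparity does, here is a minimal sketch of block matching on a single scanline, in plain Python. This is purely illustrative (the function name and parameters are made up for this example, and MRL's actual OpenCV service works on full images); it just shows that a feature appearing at different horizontal positions in the left and right views yields a disparity, which is inversely related to range.

```python
def disparity_1d(left, right, window=3, max_disp=4):
    """Estimate per-pixel disparity between two 1-D scanlines using
    sum-of-absolute-differences (SAD) block matching.

    For each pixel in the left scanline, slide a small window over the
    right scanline and pick the horizontal shift (disparity) whose
    window contents match best. Larger disparity = closer object.
    """
    half = window // 2
    disp = [0] * len(left)
    for x in range(half, len(left) - half):
        patch = left[x - half:x + half + 1]
        best_d, best_cost = 0, float("inf")
        # Only try shifts that keep the right-image window in bounds.
        for d in range(min(max_disp, x - half) + 1):
            cand = right[x - d - half:x - d + half + 1]
            cost = sum(abs(a - b) for a, b in zip(patch, cand))
            if cost < best_cost:
                best_d, best_cost = d, cost
        disp[x] = best_d
    return disp


# A bright feature at index 5 in the left view appears at index 3 in
# the right view, so its disparity is 2.
left = [0, 0, 0, 0, 5, 10, 5, 0, 0, 0]
right = [0, 0, 5, 10, 5, 0, 0, 0, 0, 0]
print(disparity_1d(left, right)[5])
```

A real implementation would run this over every scanline of rectified camera images and convert disparity to depth via the camera baseline and focal length.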
Short answer: more sensors are usually "better".
Don't think about what you can do with "either/or"; think about what you can do with them "together". Two different angles of view aid positioning in three dimensions. I am trying to make the software "do the best it can" with whatever sensors it has. The design strategy is that it works with one, the other, or both. Lacking a sensor means it can make less intelligent decisions, but it does the best it can with what it has.
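The "do the best it can with what it has" strategy above could be sketched like this. This is a hypothetical illustration, not MRL's actual API; the function and field names are invented to show the graceful-degradation pattern, where each missing sensor narrows the kind of estimate produced rather than causing a failure.

```python
def estimate_target(camera=None, kinect=None):
    """Return the best position estimate available from the sensors
    we actually have (illustrative sketch, not MRL code).

    camera: dict with a 2-D "bearing" (angle to target), or None
    kinect: dict with a "depth" reading in meters, or None
    """
    if camera and kinect:
        # Both sensors: fuse the camera's bearing with the Kinect's
        # depth for a full 3-D fix.
        return ("3d", camera["bearing"], kinect["depth"])
    if kinect:
        # Depth only: we know how far, not which direction.
        return ("depth-only", None, kinect["depth"])
    if camera:
        # Bearing only: we know which direction, not how far.
        return ("bearing-only", camera["bearing"], None)
    # No sensors: degrade to "no estimate" rather than crashing.
    return ("none", None, None)


print(estimate_target(camera={"bearing": 12.5}, kinect={"depth": 1.8}))
print(estimate_target(camera={"bearing": 12.5}))
```

The point is structural: every code path returns something usable, and adding a sensor upgrades the answer instead of being a hard requirement.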
Yes, by all means.
There's a GUI for MRL, but if you want more control, you can use Python. Want even more control? Use Java. More control than that, and you'd have to use the Force.