Wish list for algorithms

Hi Guys,

I will be attending a meetup in Paris with many people from all over who are ready to work on algorithm projects.

Presenting MyRobotLab and InMoov is a good occasion to recruit members for the project in various fields.

Igor Carron, the organizer of the meetup, proposed to launch a wish list of what could be great, necessary, or futuristic to progress with MyRobotLab and InMoov.

Let's not worry about barriers; what can be achieved will be done. Just add your wish to the list:

-It would be great to be able to calibrate all InMoov default positions using the Kinect in coordination with the eye camera in some way.

-It would be great if the Kinect could filter the environment to select only one skeleton (in a crowd). Maybe a centering definition at initialization...

-It would be great to have a sort of open-source "Pixy" board on which we could plug any type of USB webcam to pre-process data before sending it to MRL. (This is more of a hardware item.)

-It would be great if InMoov could use MRL's Mr. Turing as a basic AI. (I think that is already possible.)



kwatters

Wish list tracking

Hi Gael,

  We have started to use the "issues" feature in GitHub.


  We can just open issues for any sort of feature or algorithm that we want to add. Feel free to have a look; right now they're mostly for tracking and fixing bugs that we find, but we can also use them for new features.

  I like it because it's built right into the GitHub page.


As for some comments about the features you're talking about:

1. Yes, auto-calibration would be awesome! Maybe the InMoov can look into a mirror and learn its resting position.

2. we can try to look at improving the skeleton tracking and detection, but we might have to reach out to the OpenNI project for help there.

3. I'm pretty sure MastaBlasta just got a Pixy camera; I'm sure we'll learn more about it.

4. ProgramAB handles the basic AI stuff and powers Mr. T on the shoutbox. I have some example scripts for lloyd that use it. (Speech recognition is still using Sphinx, but Ma.Vo. built an Android app that uses Google's speech recognition; MastaBlasta was also using that in his videos.)



GroG

1. Let's start!

Step 1 - One script to maintain, with many auxiliary scripts... It needs to be decomposed and restructured so that anyone, from a finger starter to a full-blown InMoov with a mobile platform, can use it in a similar fashion.

We need to start thinking of the giant scripts as modules. Different modules can be utilized depending on completion or on the availability of specific equipment (sensors, mobile platform, etc.). One such module would be configuration: a configuration routine could "generate" the configuration module, which would then be used for that particular build's calibration.

Topcodes, a mirror, or the Kinect could all be used for the "generation" routine.
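To make the module idea concrete, here is a minimal sketch (all names are hypothetical, not actual MRL services): a calibration run generates a config file once, and the main script then starts only the modules a given build actually has.

```python
import json

def generate_config(path, rest_positions):
    """Save measured rest positions (e.g. from a mirror/Kinect run)."""
    with open(path, "w") as f:
        json.dump({"rest_positions": rest_positions}, f, indent=2)

def load_config(path):
    with open(path) as f:
        return json.load(f)

def start_modules(config, available):
    """Start only the modules this build has (hand, head, mobile base...)."""
    started = []
    for name in ("head", "left_hand", "right_hand", "mobile_platform"):
        if name in available:
            started.append(name)   # here we would attach servos, apply config, etc.
    return started

# Example: a "finger starter" build only has one hand.
generate_config("inmoov_calib.json", {"left_hand.thumb": 90})
cfg = load_config("inmoov_calib.json")
print(start_modules(cfg, {"left_hand"}))   # ['left_hand']
```

The point is just the shape of it: the generated file is the "configuration module", and everything else loads conditionally.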

2. There are a variety of strategies for selecting a specific person. OpenNI does this already to some extent - I don't know how accurate it is or all its limitations, but OpenNI/NiTE2 will give you a skeleton index. This could theoretically be combined with the OpenCV recognition filter to "lock" onto a particular person.
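As a rough illustration of the "lock" idea (data shapes and the matching rule are made up, not the OpenNI or OpenCV API): pick the skeleton whose head position falls inside the recognized face's bounding box, then track only that index in later frames.

```python
def lock_skeleton(skeletons, face_box):
    """skeletons: list of (index, (head_x, head_y)) from the tracker;
    face_box: (x, y, w, h) around the recognized person's face."""
    x, y, w, h = face_box
    for idx, (hx, hy) in skeletons:
        if x <= hx <= x + w and y <= hy <= y + h:
            return idx           # this index stays "locked" from now on
    return None                  # recognized person not currently visible

# Two people in frame; the face recognizer found our person on the right.
skeletons = [(1, (120, 80)), (2, (400, 95))]
face_box = (380, 60, 60, 70)
print(lock_skeleton(skeletons, face_box))   # 2
```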

3. Borsachi06 has a Pixy - his review has not been that good so far - but I'm familiar with how many people find sensors in general deficient. It turns out they rarely work "perfectly"; often it takes more effort to filter out noise and refine the data.

Alessandruino

- accurate TLD tracking: Predator style (object recognition based on classifiers)

- SURF filter in OpenCV

- low-latency, accurate free-form speech recognition (multi-language if possible)

- SLAM (simultaneous localization and mapping) using the Kinect

Of course, all of this must be done inside MyRobotLab...

Gareth

Auto-calibration..... Wiimote camera style

How about fixing infrared LEDs on each of InMoov's joint pivots...

Use a camera with an infrared filter to make the LEDs stand out, and bingo - you get instant locations of the joints.

i.e. it will look like a dot-framework skeleton, from which, with some algorithms, the angles and positions of InMoov's limbs could be worked out.

I suggest this because I have experience using a hacked Wiimote camera that is able to detect 4 distinct infrared sources. Not only can you detect position in the XY plane, but also depth.

If a webcam plus a strong IR filter is pointed at InMoov, then many more IR LEDs could be detected.

It would take quite a bit of trigonometry for the calculations, but I am sure it could be done.
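For a taste of the trig involved, assuming a simple pinhole-camera model (the focal length and LED spacing below are made-up numbers, not Wiimote specs): two LEDs a known distance apart appear closer together in pixels the farther away they are, which gives depth directly.

```python
import math

def depth_from_led_pair(p1, p2, led_spacing_m, focal_px):
    """Pinhole model: depth Z = f * L / d, where f is focal length in
    pixels, L the real LED spacing, d the measured pixel separation."""
    d = math.hypot(p2[0] - p1[0], p2[1] - p1[1])  # pixel separation
    return focal_px * led_spacing_m / d

# Two LEDs 10 cm apart, seen 130 px apart by a camera with an assumed
# 1300 px focal length -> the limb is about 1 m away.
z = depth_from_led_pair((400, 300), (530, 300), led_spacing_m=0.10, focal_px=1300)
print(round(z, 2))   # 1.0
```

With four or more LEDs you get orientation as well, which is exactly what the motion-capture approach below exploits.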

Here is the basic idea... from one of my earlier projects :-

hairygael

Wawooo, those are good wishes.

You guys have better algorithm requests than I do.

-Could the shoulder-axis problem with the Kinect be solved through algorithms?

-For calibration, I was thinking of creating a 3D-printed bracelet with four black dots. The bracelet could sit on the hand of InMoov; this way the Kinect camera could measure perspective when calibrating the arms.


GroG

Topcode Tattoos !

Topcodes are already borg'ed into MRL. They can potentially give X, Y, Z position if viewed straight on, and also rotational information. The challenge with Topcodes is that they have to be viewed straight on.

We could build an algorithm on top of Topcodes: an InMoov with Topcode tattoos on its joints moves to a starting position and rotates or moves until a Topcode is found. Once it's found, we can save this as calibration information.
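A sketch of that search loop (detect_topcode here is a stand-in for the real Topcode detection, not an MRL API): sweep the joint in small steps until the marker reads head-on, then record that angle as the calibration point.

```python
def sweep_until_marker(detect_topcode, lo=0, hi=180, step=5):
    """Move in `step`-degree increments until the tattoo is seen."""
    for angle in range(lo, hi + 1, step):
        if detect_topcode(angle):      # would move the servo, grab a frame
            return angle               # save this as calibration info
    return None                        # marker never became readable

# Simulated detector: the Topcode is only readable near 90 degrees,
# i.e. when the joint faces the camera straight on.
found = sweep_until_marker(lambda a: abs(a - 90) <= 2)
print(found)   # 90
```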


Gareth

IR aware - motion capture style

Basic principle :-

Add a handful of infrared LEDs at roughly mid-limb positions.

A camera is pointed at InMoov from the front, straight on.

Note: below is the base calibration position.

Note the vertical line of 5 leds

Note the horizontal line of 4 leds

Note the inverted Triangle on chest

This arrangement is the key to the calibration. (or at least one variation)

It's the job of the program's algorithms to align InMoov to the vertical and horizontal arrays of LEDs.


For example, shoulder error (nipple LEDs not in line with the upper/lower-arm LEDs) :-


or Head rotation error :-



Torso rotation :-


torso leaning :-
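One way the alignment algorithm could score an error like these (a sketch, not from any existing code): measure the angle of the line through a pair of LEDs against the horizontal reference, and have the calibration routine drive that angle to zero.

```python
import math

def alignment_error_deg(led_a, led_b):
    """Angle (degrees) of the line a->b relative to horizontal;
    0.0 means the two LEDs are perfectly level."""
    dx = led_b[0] - led_a[0]
    dy = led_b[1] - led_a[1]
    return math.degrees(math.atan2(dy, dx))

# Shoulder LEDs level -> no error; a drooping arm shows up as a tilt.
print(alignment_error_deg((100, 200), (300, 200)))   # 0.0
print(alignment_error_deg((100, 200), (300, 230)))   # positive droop angle
```

The same check against the vertical line of LEDs would catch torso lean and head rotation.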


IR LEDs show up well on any camera... even a Kinect.

Just an idea.... along the lines of motion capture..

(It could also be possible to place reflective markers and flood InMoov with a 3W IR LED... then the reflective blobs will shine brightly too.)