[Inmoov Script] Merge Them All !

[WIP status] :

- gestures integration
- Spanish translation
- Automatic attachment/detachment with the help of autoattach()
- Chatbot corrections


Because standardization is the key to great things, this community base script was born.

The idea is to make a universal & international InMoov script: first for beginners, with basic functionality, then with fully extended functions from many MyRobotLab services.
The merge will start from the work of Hairygael, Kwatters, and some pieces of mine, and will be actively enhanced by anyone who wants to bring ideas, with the help of the Java kings.
The goal is to publish a powerful script: clean, standardized, easy to understand, as small as possible, fully documented, and of course WORKY FOR ALL.

To give InMoov a great life.


Bugs can be opened here : https://github.com/MyRobotLab/inmoov/issues



1/ Download & update:
JAVA - https://www.java.com/fr/download/
CHROME - https://www.google.fr/chrome/browser/desktop/index.html ( set it as default )
ARDUINO - https://www.arduino.cc/en/Main/Software
2/ Set the COM port of your Arduino(s) in Device Manager to 115200 baud.
3/ Create a new folder [mrl] at the root of your disk.
4/ Download MRL and put it inside c:\mrl :
STABLE : https://github.com/MyRobotLab/myrobotlab/releases
LATEST BUILD : http://mrl-bucket-01.s3.amazonaws.com/current/develop/myrobotlab.jar
5/ Download the script : https://github.com/MyRobotLab/inmoov/archive/master.zip and extract it like this :


6/ Click START_INMOOV.bat and wait a little.
7/ Close MRL and upload MRLcomm.ino ( from C:\MRL\resource\Arduino\MRLComm ) to your Arduino.
You can set up your Arduino port, language, and voice by editing inmoov.config in the InmoovScript folder,
and the .config files in the inmoovSkeleton folder.


Folders : https://github.com/MyRobotLab/inmoov/wiki/FOLDERS-DOCUMENTATION
Config Files : https://github.com/MyRobotLab/inmoov/wiki/CONFIG-FILES-DOCUMENTATION






calamity's picture


It's an excellent idea.

I read the discussion you had with Gael the other day, and I had some ideas about it.

I think it would be great to have a system as modular and plug-and-play as possible. I mean, if you want to use a module or a service, you download the script to a folder, maybe edit its config, and just launch the main script.

Let me give an example.

A new builder has just completed the left arm and wants to test it. He downloads the InMoov package and the left arm module/script, edits its config, and just launches the program ( through a .bat file or directly in MRL ).

Then he builds the left hand, downloads the left hand module, and nothing more is needed to have the arm and hand working together.

A system like that would be ideal for beginners, as there is no scripting to do and no scripts to merge.

This can easily be done the same way the gestures are loaded.

I built my own InMoov script a bit in that way, but it's still not fully "plug and play".
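The plug-and-play loading described here can be sketched as a folder scan that executes each module script it finds, the same way gestures are loaded. This is a minimal sketch, not the real InMoov loader: the `load_modules` helper, the demo folder, and the `PARTS` list are all illustrative names (under Jython 2.7, `execfile(path, namespace)` would replace the `exec` call).

```python
import os
import tempfile

def load_modules(folder, namespace):
    """Execute every .py file found in `folder` so each module registers itself."""
    loaded = []
    for name in sorted(os.listdir(folder)):
        if name.endswith(".py"):
            path = os.path.join(folder, name)
            with open(path) as f:
                # each module script runs in the shared namespace and registers itself
                exec(f.read(), namespace)
            loaded.append(name)
    return loaded

# Demo with a throwaway folder standing in for e.g. an "InmoovSkeleton" folder
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "leftHand.py"), "w") as f:
    f.write("PARTS.append('leftHand')\n")

ns = {"PARTS": []}
loaded = load_modules(demo, ns)
```

Dropping a `leftHand.py` file into the folder is then all that is needed for the part to be picked up on the next launch, with no editing of the main script.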





Just an idea, but I think there is a lot of potential in going that way.


moz4r's picture


Thank you for this great idea! Like you say, it is a good option to plug and play services.
About this functionality, do you think it is better to have only one config file, or one config file per service?
We need to take care of the general script that coordinates the optional services ( e.g. change the NeoPixel action depending on whether the service script is loaded or not ; just conditions to add )

Do you think we need to decompose every member of InMoov, or only the left / right side?

About config files, I currently use inclusion of Python scripts, but a ConfigParser would definitely be cleaner, no?


calamity's picture


Having one config file or multiple files has both advantages and disadvantages. The advantage of one config file is that everything is in the same place; the disadvantage is that when it becomes big, it can get hard to manage.

So it's mostly about searching for a file versus searching in a file.

I have always been an adept of modularity: make black boxes of code with minimal dependencies between each other. It's more work to set up, but when it's done well, it's much easier to maintain and upgrade, as you only have to take care of one box at a time instead of playing with one very large box.

So, the short answer: to me it seems more natural to have configs based on services/modules than one big config file.

Of course it will need some planning to find the best way to coordinate services, but that should not be too hard.

About ConfigParser, I agree with you that it's generally cleaner, but if we go with a config file for each service, the configs will remain small and may not need a parser. It all depends on what you want as a config format.
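The per-service config idea can be sketched with Python's standard ConfigParser, as suggested above. The section and option names below are illustrative, not the real skeleton .config layout (Python 3's `configparser` is used here; Jython 2.7 ships the equivalent `ConfigParser` module).

```python
import configparser

# Illustrative contents of one small per-part config file, e.g. leftHand.config
SAMPLE = """\
[SERVO]
minPos = 0
maxPos = 180
defaultSpeed = 0.75
"""

def load_part_config(text):
    """Parse one skeleton-part config and return its servo limits as a dict."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return {
        "min": parser.getint("SERVO", "minPos"),
        "max": parser.getint("SERVO", "maxPos"),
        "speed": parser.getfloat("SERVO", "defaultSpeed"),
    }

cfg = load_part_config(SAMPLE)
```

Because each service keeps its own small file, each loader stays a few lines long, which matches the "black box per module" design discussed here.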


GroG's picture


All good ideas.

  • Plug and play
  • Per service Configuration
  • Modularity

Another good design mantra is "Convention over Configuration",
which means that with a very minimal amount of user intervention, things are "worky".
Configuration should always have "best guess" worky values.

I'm pretty excited about building/shipping a "One InMoov Script which Rules Them All" inside InMoov, similar to MrlComm. Not only will the script be unified with the exact version it came with, but it allows us to do more meaningful automated testing. When we start this script, we can have Travis exercise many tests using the script it shipped with... making Stronger Worky !

moz4r's picture


Hi! First worky version. Is it a good direction?

Version 0.0.1


- Tiny startup script
- MaryTTS voice download
- Globalized
- Auto diagnostic
with subconscious mouth: an always-worky voice used for auto diagnostics
- NaturalReader as an option
- Per-user config ( with ConfigParser to store personal parameters )
- Service folder to put Python service-related files inside
- MRL + script launcher
downloads MRL services
deletes myrobotlab.log at startup


- Check that the MRL version is not too old
- Check the MrlComm version


moz4r's picture


version 0.0.3
- Create InmoovSkeleton folder
so we can choose which parts of InMoov we start. By default it is
"fingerstarter mode" > RightSide + isRightHandActivated
- Add right hand skeleton part with a servo config file

TODO: ear.commands about the skeleton; find a place to sort ear commands

So, no need to create different script files. Good or bad idea?

calamity's picture


Hi anthony

I did not try your scripts yet, but I had a quick look at them. I'm impressed by your Python fu.

I really like how you manage the services and the skeleton.

I don't know if it's possible, but in order to keep the main script as general as possible, could the choice of "mode" be moved to another loadable folder? A user would then just have to swap or rename that folder to use the mode he wants. This also opens the possibility of a custom mode where the user could use a 3rd Arduino for the head, or use a Raspberry Pi as a controller instead of an Arduino. The customization could be done there instead of playing in the main script and configs. The ear commands for the fingerstarter or another mode could also be set there.

Just throwing out an idea; I don't know if it fits your plan.

But so far it really looks good.

hairygael's picture


Ah, funny: Grog created another repository called inmoov directly in MyRobotLab instead of the one I had created in pyrobotlab. Will need to transfer some stuff.


Ok, I moved some scripts in there and modified some minor things. Added and modified a minimal script to test the hand.

I tested the launcher after modifying the config to my needs, and it went Worky great for the voice! I then tried to change the voice from MaryTTS to NaturalReader to see how subconscious mouth would react, and it did exactly what it was supposed to do.

It alerted me to restart MRL because a new voice was downloaded.

The servo sliders were not starting although the right Arduino was connected, so I set MyRightArduinoIsConnected=1 and the sliders launched. Version 1851 definitely has some issues attaching/detaching, so it's not yet the correct version for testing whether everything works fine.


moz4r's picture


Thanks for the feedback, guys! I will work on the next update to add:

- Timer to check the Arduino state
- More accurate Arduino detection. Calamity, how do we check the MrlComm version expected by MRL ( to verify that the content of the Arduino is ok )?
- Don't remove user settings when the script is updated
- Skeleton additions ( I think we can add "ear commands" in a "vocal commands" section of the skeleton part? )

calamity's picture


Hi Anthony

In the latest build you can get the MrlComm version with arduino.getBoardInfo().version
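A startup check built on that call might look like the sketch below. The `Arduino` and `BoardInfo` classes are stand-ins for the real MRL service, and the expected version number is made up; only the `getBoardInfo().version` access mirrors the call mentioned above.

```python
EXPECTED_MRLCOMM_VERSION = 41  # illustrative value; the real one comes from the MRL build

class BoardInfo(object):
    """Stand-in for the object returned by the real getBoardInfo() call."""
    def __init__(self, version):
        self.version = version

class Arduino(object):
    """Minimal stand-in for the MRL Arduino service."""
    def __init__(self, version):
        self._info = BoardInfo(version)
    def getBoardInfo(self):
        return self._info

def mrlcomm_is_current(arduino):
    """True when the sketch flashed on the board matches the version MRL expects."""
    return arduino.getBoardInfo().version == EXPECTED_MRLCOMM_VERSION

ok = mrlcomm_is_current(Arduino(41))     # board is up to date
stale = mrlcomm_is_current(Arduino(39))  # board needs reflashing
```

A check like this at startup lets the script warn the user to re-upload MrlComm.ino instead of failing later with confusing errors.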


moz4r's picture

servo attach/detach big question !

The MrlComm check is worky, thanks!

Mmmm, guys, what do we do about automatic servo attach/detach?


Autodetach will be possible ( if setVelocity is used ), so we can test it soon.




moz4r's picture

version 0.0.5

Hi! Some changelog:

- Add servo autoDetach(): very impressive!
- Add left hand to skeleton
- Add default config files, so your config files are not deleted when the script is updated
- NeoPixel integration ( thanks a lot @Calamity ):

NeoPixel in the service folder with a config file ( only rx/tx for now ). There are some preprogrammed functions, like flashing green while booting, red on errors, and blue while the robot is downloading something.

- inmoovLife folder: the folder where we can put automations and timers
- inmoovCustom script: you can add your own commands inside
- Robot_Status_GlobalVars.py: here we declare some vars so we can read/write them everywhere, for example in inmoovLife ( robotIsActualySpeaking ... )

One day it would be great to build a GUI to control the config!

Is the work going in a good direction or not?

hairygael's picture


Hello all,

Being away from my robots, I haven't tested the latest modifications you guys made on the InMoov repository.

I'm a bit concerned about whether autodetach is appropriate for the hand or arms, because the robot needs to keep holding a pose when needed. Autodetach was implemented by Kwatters for the jaw, and it was a very good solution to save the servo from burning.

If we remove ear.addCommands from the script, which I think is currently a bit confusing, we need to create a folder that contains the possible verbal commands, so users know what to say to trigger specific functions or gestures.

Regarding the NeoPixel service, I currently use it in various gestures, which automatically triggers specific animations; will that still be possible? For example, when it starts to search for faces (facetracking), it runs the Larsen scanner in deep green.

I lately also had the option to automatically face-recognize a person and say their name, which would activate the user folder in ProgramAB. Do you think that will still be possible?

For the AIML and gestures we can copy everything from my repository:



I corrected some spelling errors: WebkitSpeachReconitionFix to WebkitSpeechRecognitionFix and WebkitSpeechRecognitionOn.

Once home, I will add the French AIML folder. I have also modified my gesture AIML to work for French as well as English.

Altogether, the progress is very good!! I wonder if newbies will understand how all those folders work with each other... Sebastien proposed about a month ago to create a very nicely designed visual frontend GUI to set all the preferences.



Ash's picture


Hi All,

The newbies don't understand how all those folders work.... :-((

moz4r's picture


Hi! Thanks for your feedback, @hairygael

Autoattach/detach :

If you want to disable or enable it, you can set autoDetach=1/0 in the .config of the skeleton part, so this functionality can be turned off or on.

@Calamity I think we need 2 functions, Autoattach() and Autodetach(). Autoattach will be useful all the time: no need to say "Attach X" when X is detached, and it avoids the errors caused when someone forgets to attach.

Later, maybe we need to think of something to use autodetach by default, and no-autodetach for specific gestures?
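The Autoattach/Autodetach idea could be sketched like this. The `Servo` class is a stand-in for the real MRL servo service, and `autoMoveTo` is a hypothetical helper, not an existing InMoov function: it attaches on demand before moving, then detaches again when the autoDetach flag from the .config is set.

```python
class Servo(object):
    """Stand-in for the MRL servo service (the real service has similar method names)."""
    def __init__(self):
        self.attached = False
        self.pos = None
    def attach(self):
        self.attached = True
    def detach(self):
        self.attached = False
    def moveTo(self, pos):
        if not self.attached:
            raise RuntimeError("servo not attached")
        self.pos = pos

def autoMoveTo(servo, pos, autoDetach=True):
    """Attach on demand, move, then detach again when autoDetach=1 in the .config."""
    if not servo.attached:
        servo.attach()   # auto-attach: no spoken "Attach X" needed, no error possible
    servo.moveTo(pos)
    if autoDetach:
        servo.detach()   # release the servo so it does not burn holding the pose

wrist = Servo()
autoMoveTo(wrist, 90)
```

Gestures that must hold a pose (Gael's concern about the hand and arms) would simply call the helper with autoDetach disabled for that move.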


Guys, I don't know what the best way to go is; this is the current state, let's talk about it:

First, if I understand correctly, ear.addCommands are used today in your minimal scripts for people who don't use the chatbot. Is that right?
If yes, good: there is no redundancy with the chatbot.
If no, we need to think.

FingerStarter : commands are in the main script inmoov.py; no need to touch anything else ( just the COM port and language in the config file ).
Other minimal commands : each ear.command related to a skeleton part is stored at the beginning of that skeleton file. Example:

leftHand.py > "open your left hand"

In Inmoov_minimal.py you can find every ear.addCommand that calls gestures across multiple skeleton parts.
But a new dedicated ear folder would be a better solution, and it would make everything easy to translate too!



Of course, yes. A new Python-side function was added: PlayNeopixelAnimation() / StopNeopixelAnimation(). It is the same as neopixel.setAnimation(), but it causes no errors if someone doesn't have a NeoPixel, so you can put it anywhere you want. It will work the same way for someone using 3 eyes / 4 legs... Never saw a 3-eyed InMoov, but we need to be prepared :)
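A safe wrapper of that kind might look like the following sketch. The `services` registry and the animation name are illustrative; the point is only the guard that turns a missing NeoPixel into a silent no-op instead of an error.

```python
services = {}  # filled at startup by the service-folder loader (illustrative registry)

def PlayNeopixelAnimation(name):
    """Run a NeoPixel animation if the service was loaded; otherwise do nothing."""
    neopixel = services.get("neopixel")
    if neopixel is None:
        return False  # robot has no NeoPixel: silent no-op instead of an error
    neopixel.setAnimation(name)
    return True

# Safe to call from any gesture, even though no NeoPixel service is registered here:
played = PlayNeopixelAnimation("Larsen Scanner")
```

Gestures can then call the wrapper unconditionally, so the same gesture files work on robots with or without the optional hardware.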

facerecognize / chatbot implementation

On the todo list, after the minimal base?

spelling errors

If there are spelling errors, I'm 100% sure it was me, sorry :) Thanks for the corrections you made.

newbies and all

If the base is ok, I think better documentation will definitely be useful?


GUI to configure

I say Wahooo! I'm excited about this idea.


mayaway's picture


Already the instructions have gone south.
8/ You can set up your Arduino port, language, and voice by editing BasicConfig.ini in the InmoovScript folder.

NOTE: THERE IS NO BasicConfig.ini, so the newbie is already baffled. It looks like Inmoov.config.default contains those settings, but should it be edited? Renamed? Left alone? Trial and error? Send needless noworkies?

mayaway's picture

Missing Checkup Routine

Within ../InMoovScript/Inmoov.py there is a reference to InitCheckup.py, which does not exist in the current download. The newbie is baffled again! Do we create this script? Is it pending? Does it get auto-downloaded later? Do we need to worry about setting the value of RunningFolder, and if so, where is that done?

# health checkup & startup functions
moz4r's picture


Hi mayaway! Thank you for the feedback.

InitCheckup.py is in the system folder. Any errors about this?
BasicConfig.ini > Inmoov.config .

All the settings you need to modify are in the .config files, like the main inmoov.config and, for example, leftArm.config in the skeleton folder.

We will write real documentation when the WIP is over.

moz4r's picture

Some synthesis about advancement

Hi all, this is the latest work; let's talk about organisation. If you are ok with it we can go further; if not, it will be adjusted.

From the user's side, there is no need to touch any .py: it works out of the box, and all the setup is done in .config files.
Advanced users or developers can add their own commands in the custom script, and of course use GitHub to collaborate.
Out of the box, the script is worky as a "Fingerstarter".
Folder descriptions:

Main Folder
Inmoov.py > the main script ( very tiny, also called Fingerstarter )
Inmoov.config > basic user configuration ( Arduino COM port, language, voice/ear engine ... )

InMoov can listen with the help of the ear service "webkitspeechrecognition".

To interpret the recognized text, you have the choice of 2 engines:
ear.addCommand > hard-coded text; very EASY to use ear.commands and script actions
chatbot > very powerful AIML engine ( not yet implemented ). This is the engine of the "Full InMoov".

So every ear.command from Gael's minimal script will be inside inmoovVocal\ear.addCommand.
It is very easy to translate the minimal scripts; everything is in one place.

TODO : need to work carefully on chatbot/ear.commands conflicts
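The ear.addCommand engine boils down to a phrase-to-callback table, which can be sketched like this. The `Ear` class below is a tiny stand-in for the real webkitspeechrecognition service, with `interpret` as a hypothetical lookup helper:

```python
class Ear(object):
    """Tiny stand-in for the webkitspeechrecognition ear service."""
    def __init__(self):
        self.commands = {}

    def addCommand(self, phrase, service, method):
        """Bind a hard-coded phrase to a (service, method) action."""
        self.commands[phrase] = (service, method)

    def interpret(self, phrase):
        """Return the action bound to a recognized phrase, or None if unknown."""
        return self.commands.get(phrase)

ear = Ear()
ear.addCommand("open your left hand", "python", "leftHandOpen")

hit = ear.interpret("open your left hand")   # known phrase: dispatch to python method
miss = ear.interpret("do a backflip")        # unknown phrase: nothing happens
```

The chatbot engine, by contrast, matches free text against AIML patterns, which is where the conflict handling mentioned in the TODO comes in.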

Inside this folder you can find all the preprogrammed gestures of InMoov, like "Open the right hand", "Da Vinci"...
Those gestures are often associated with an ear.command from the previous folder, but not necessarily ( for example when a gesture is called by a timer or a chatbot action ).

You will find every official InMoov skeleton part ( leftHand, rightArm, etc. ).
Every part is optional, and we can choose to launch one part or the whole robot.
Every skeleton part is associated with a .config file holding its individual parameters ( min/max servo values, default speed ... ).
This is the folder where you can put optional mods and extra servos, like Bob's neck and many others.

WIP idea about timers and automations

Never deleted when the script updates. You can script your own InMoov inside it and take advantage of the basics already loaded.

In this folder we load every optional (or not) MRL service used by the script ( e.g. arduino/neopixel ).

Core and init levels

I hope you understand the explanations.
Have a nice day! ( Or night, if you are on the other side of the planet. )



moz4r's picture

Globalized ! [version 0.1]

The whole script is now globalized ( if you have a language pack! )

Useful for:
Startup commands ( starting ear ... )
Errors and subconscious mouth ( bad MrlComm version inside Arduino COM7 ... )
ear.addCommand ( open your right arm ... )

What is a language pack?

- A folder : languagePack
Inside this folder, every language pack is stored under the name of the language you have chosen in inmoov.config ( en, fr, de ... ).
Inside each pack you find .lang files.
Example: Errors.lang

lang_BadMrlcommVersion="Bad M R L com version inside arduino , please update "
lang_ArduinoNotConnected="There is a problem ! with my communication port, check your arduino "

Example: ear.addCommand\minimal.lang

ear.addCommand("open your hands", "python", "handsopen")

There are 2 language packs at this time: French and English.
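Loading such a .lang file can be sketched as a small key/value parser. The `parse_lang` helper below is illustrative, not the real language-pack loader; it handles only simple `key="value"` lines like the Errors.lang example above.

```python
def parse_lang(text):
    """Parse key="value" lines from a .lang file into a dict of strings."""
    strings = {}
    for line in text.splitlines():
        line = line.strip()
        if line and "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")   # split only on the first "="
            strings[key.strip()] = value.strip().strip('"')
    return strings

# The two lines from the Errors.lang example above, inlined for the demo
EN_ERRORS = '''
lang_BadMrlcommVersion="Bad M R L com version inside arduino , please update "
lang_ArduinoNotConnected="There is a problem ! with my communication port, check your arduino "
'''

messages = parse_lang(EN_ERRORS)
```

Switching the language then means nothing more than reading the .lang files from a different subfolder of languagePack.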

moz4r's picture

add torso to skeleton

Need to add the rollneck mod, and work on full gestures integration after that.


moz4r's picture


- Some news: we are refactoring the French chatbot AIML files to enhance the possibilities of answers by reflecting webkit recognition. It is huge work; thanks to the French people who are helping ( grammar corrections, adding accents, srai verification ... ). Does anyone know a tool to find orphan srai?

- Gestures need to be refactored a little before integration; we will soon publish 1 or 2 sample gestures to test. Then all gestures will be refactored for speed control + servo attachment, following this worky model for all.

- Update script to 0.1.5
* optional globalized chatbot engine, to test the chatbot
* optional nervoboard service ( to control up to 6 relays; 1 nervoboard : up to 3 power sources ). Useful to detach an undetachable servo with a classical servo.detach(), or simply to control robot power.

moz4r's picture


About the last commit: if you update the script,

Inside all config files,


was renamed to :


You need to change it manually inside your own config files, because there is nothing automatic at this time.