I've got the same problem with speech recognition as mentioned in another topic. The mic status keeps switching between ready and not ready, and after a few seconds MRL shows an error: Too many events.
I would like to bypass speech recognition and control my robot with written commands instead. How can I do this?
Do you have more than one tab open?
No. I use Chromium only for the WebGui of MRL. My standard browser is Firefox. I know that Chromium should be the standard browser, but Chromium cannot be configured as the standard on my system, so I entered the address of the WebGui by hand.
It's all getting too complicated, and it is not the first thing I want to deal with. I want to test my robot functions, and that is best done with written commands.
You can use Python to test, but what exactly do you want to test?
I'd like to enter gestures like "openhand" or "raisefinger", which as far as I know exist as Python scripts. What I need to know is how I can start these gesture scripts.
What version of myrobotlab are you using?
As a first try I installed Manticore. Now I have installed Nixie with InMoov2 in a separate folder, just to see if things go better.
With Manticore I get the described situation where the speech recognition is not working.
In Nixie I get a WebGui frame with a side panel listing the services, and for some services, e.g. a finger, the graphic for the servo adjustment is shown. But I cannot see any reaction to speech commands.
Hello,
You can send each gesture as a written command via the Python UI.
The list of available Python gestures is in the directory InMoov/gestures for Manticore, and in resource/InMoov2/gestures for Nixie.
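If you want to check which gesture scripts are actually present on your install, a quick way (just an example, assuming myrobotlab was started from its install folder so the relative path resolves) is to run this in the Python UI:
import os
print(os.listdir("InMoov/gestures"))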
Take the def from within the script and execute it in the Python UI.
For example, with the gesture handsclose.py:
def handsclose():
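    # the five values are the finger positions: thumb, index, majeure (middle), ringFinger, pinky, in servo degrees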
i01.moveHand("left",180,180,180,180,180)
i01.moveHand("right",180,180,180,180,180)
You would execute in the Python UI:
handsclose()
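You can also test the underlying call directly in the Python UI without loading any gesture script, for example (using the i01 service name from the script above; the values are just placeholders to adjust for your servo calibration):
i01.moveHand("left",0,0,0,0,0)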
If you use the InMoov2 UI in Nixie, you can also execute all the gestures via that panel, which contains a drop-down list of all available gestures.
Thank you Gael for the interesting information. I tried to enter the command in untitled.py and then execute it with the corresponding button. I always get a traceback saying the method is not defined. This is with Manticore; I have not tried Nixie yet.
Hello,
This means the gestures are not properly loaded when you launch the START_INMOOV.bat.
Have you been able to test the servos via the slider GUI? Do they move?
Maybe you should try to re-install a fresh version; perhaps something didn't install correctly.
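In the meantime, a rough workaround is to load the gesture script by hand in the Python UI before calling it. This is only a sketch and assumes Manticore was started from its install folder, so the path may need adjusting:
execfile("InMoov/gestures/handsclose.py")
handsclose()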
Thank you for your response. I have done a reinstall several times already. I have the myrobot.log from directly after the install, and the log from after configuring the right side and starting the gesture Python script. Is there a possibility to attach or upload the files, or can I extract details? What should I search for?
The start_inmoov.sh has to be modified after the install, because it has the wrong line endings for a Linux shell. I did this with a text editor that can switch from Windows line endings to Linux ones.
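For reference, the same line-ending conversion could also be done with a small Python snippet (just a sketch; it assumes start_inmoov.sh is in the current directory and overwrites the file in place):
with open("start_inmoov.sh", "rb") as f:
    data = f.read()
with open("start_inmoov.sh", "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))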
The slider GUI works correctly; it does move my finger starter. Only the gesture script causes problems.
The gesture is now doing what I want. I found my mistake. Thank you for the support.