Cool ideas of things to add
I haven't used MRL in about a year. I just downloaded the latest version and it looks pretty nice. You guys have done a good job in this release.
Hey everyone!
Just browsing through the MagPi magazine and I found this.
Sorry for my language.
I used my Arduino Uno to move the two servos for the InMoov eyes, but I can't find a working Python script!
Can you help me?
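Not a full script, but here is a minimal sketch of the angle math for driving two eye servos (pan and tilt). The servo names, pins, COM port, and the MRL calls shown in comments are assumptions — adapt them to your own InMoov setup.

```python
# Sketch of the angle mapping for two InMoov eye servos (pan + tilt).
# The names, pins, and MRL calls in the comments below are assumptions,
# not a verified InMoov configuration.

def gaze_to_angle(position, center=90.0, span=30.0, lo=60.0, hi=120.0):
    """Map a normalized gaze position in [-1.0, 1.0] to a servo angle,
    clamped to a safe mechanical range."""
    angle = center + position * span
    return max(lo, min(hi, angle))

# In an MRL Python script you would then do something like (untested sketch):
#   arduino = Runtime.start("arduino", "Arduino")
#   arduino.connect("COM3")               # your serial port
#   eyeX = Runtime.start("eyeX", "Servo")
#   eyeY = Runtime.start("eyeY", "Servo")
#   eyeX.attach(arduino, 22)              # your pin numbers
#   eyeY.attach(arduino, 24)
#   eyeX.moveTo(gaze_to_angle(0.5))
#   eyeY.moveTo(gaze_to_angle(-0.2))

if __name__ == "__main__":
    print(gaze_to_angle(0.0))   # center position
    print(gaze_to_angle(1.0))   # full deflection, clamped to hi
```

The clamping matters on a physical InMoov head: commanding an angle past the mechanical stops can strip the small eye servos.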
Here is a video showing the manipulator working. You can see the proximity sensor being used to pick something up.
Dom.
As I have been working on my InMoov, I was watching the videos people have posted about the Oculus and controlling the head. I was wondering if anyone has integrated the Oculus Touch for hand and arm controls.
Hello everybody!
While the MyRobotLab is very versatile with the types of controllers it supports, the InMoov service is not.
It should be possible to add support in the skeletal sections to allow for differing controller configurations beyond the standard arrangements.
For example:
One user may be using an Arduino with I2C controllers in each arm.
Another user may be running a Raspberry Pi, using the I2C bus to run servo controllers throughout the robot.
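One way to support both setups is to resolve each skeleton section's controller from per-user configuration instead of hard-coding it. A minimal Python sketch of that idea — every class and name here is hypothetical, not MRL's actual API:

```python
# Hypothetical sketch of per-section controller configuration for an
# InMoov-style skeleton. None of these names come from MRL itself; the
# point is only the mapping from skeleton sections to controller backends.

class ServoController:
    """Common interface every backend implements."""
    def attach(self, servo_name, channel):
        raise NotImplementedError

class ArduinoController(ServoController):
    """Servos driven directly from an Arduino's PWM pins."""
    def __init__(self, port):
        self.port = port
        self.attached = {}
    def attach(self, servo_name, channel):
        self.attached[servo_name] = channel
        return "%s -> Arduino %s pin %d" % (servo_name, self.port, channel)

class I2CServoController(ServoController):
    """Servos driven by an I2C servo board (e.g. a PCA9685-style driver)."""
    def __init__(self, bus, address):
        self.bus, self.address = bus, address
        self.attached = {}
    def attach(self, servo_name, channel):
        self.attached[servo_name] = channel
        return "%s -> I2C bus %d addr %s ch %d" % (
            servo_name, self.bus, hex(self.address), channel)

# Per-user configuration: each skeletal section picks its own backend,
# so one user's arms can be I2C while another user's head stays on Arduino.
config = {
    "leftArm":  I2CServoController(bus=1, address=0x40),
    "rightArm": I2CServoController(bus=1, address=0x41),
    "head":     ArduinoController(port="COM3"),
}

print(config["head"].attach("jaw", 26))
print(config["leftArm"].attach("wrist", 0))
```

The InMoov service would then only ever talk to the `ServoController` interface, and the standard arrangement becomes just the default configuration.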
Use offline speech recognition like Cortana or Microsoft Speech Recognition, and some of the Microsoft vision and emotion APIs that use the Microsoft cloud.
Hello all!
I have recently been looking into some TTS and speech recognition interfaces to borg into MRL. I may have just hit the jackpot!
I found two realistic-sounding TTS engines:
ResponsiveVoice.js and iSpeech
Both, unfortunately, require an internet connection, but both are free. Again, unfortunately, neither is open source. ResponsiveVoice is written in JavaScript and iSpeech has an SDK for Java. iSpeech requires an API key while ResponsiveVoice does not. Both support a large number of voices and are high quality.