To be updated by AdolphSmith
Next version 'Nixie' is coming soon!
I have this piece of code and expected that my "input" function would get called.
I do not know whether it is supposed to trigger on every captured frame?
Anyway, in my case "input" does not get called. What is wrong with my code (tried it with 2241 and 2275)?
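For comparison, here is a minimal sketch of the publish/subscribe pattern that per-frame callbacks generally follow. All names here (FramePublisher, subscribe, input) are hypothetical illustrations, not MRL's actual API; the point is that the handler has to be registered with the publisher before frames arrive, which is a common reason a callback never fires.

```python
# Hypothetical sketch of a per-frame callback; names are illustrative,
# not MRL's actual API.
class FramePublisher:
    def __init__(self):
        self._listeners = []

    def subscribe(self, callback):
        # The handler must be registered *before* capture starts,
        # otherwise published frames are silently dropped.
        self._listeners.append(callback)

    def publish(self, frame):
        for cb in self._listeners:
            cb(frame)

received = []

def input(frame):
    # Named "input" to mirror the post; called once per published frame.
    received.append(frame)

publisher = FramePublisher()
publisher.subscribe(input)
for frame_id in range(3):
    publisher.publish(frame_id)

print(received)  # [0, 1, 2]
```

If "input" is never called in a setup like this, the first thing to check is whether the subscription actually happened before capture started.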
Use an offline speech recognition engine like Cortana or Microsoft Speech Recognition, and some of the Microsoft vision and emotion APIs that use the Microsoft cloud.
Hello all!
I have recently been looking into some TTS and speech recognition interfaces to borg into MRL. I may have just hit the jackpot!
I found two realistic-sounding TTS engines:
ResponsiveVoice.js and iSpeech
Both, unfortunately, require an internet connection, but both are free. Also unfortunately, neither is open source. ResponsiveVoice is written in JavaScript, while iSpeech has an SDK for Java. iSpeech requires an API key; ResponsiveVoice does not. Both support a large number of voices, and both are high quality.
Hi,
Is it possible to have, in the same data package, the X and Y coordinates of the face and the name from the person recognizer?
The aim is to do face tracking and recognition at the same time.
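One way to picture the combined data package this question asks about is a single record carrying both the tracking coordinates and the recognized name. This is a hypothetical sketch, not an existing MRL message type:

```python
from dataclasses import dataclass

# Hypothetical combined message: not an existing MRL type, just a sketch
# of carrying the face coordinates and the recognizer's label together.
@dataclass
class FaceResult:
    x: int     # face bounding-box x coordinate (pixels)
    y: int     # face bounding-box y coordinate (pixels)
    name: str  # label from the recognizer, e.g. "unknown" if no match

result = FaceResult(x=120, y=80, name="Alice")
print(result.x, result.y, result.name)  # 120 80 Alice
```

With a record like this, the tracking loop and the recognizer can publish one message per detected face instead of two separate streams that the consumer has to correlate.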
Hi, is it possible to do face recognition with the OpenCV service (I think yes)?
Do you have an example?
Thank you.
Ahoy!
We recently were excited to add Java 8 default methods to interfaces for our services ...
Sadly, this borked Jython :(
But my last checkin should have fixed all that. There are now no more "default" methods in any Java org.myrobotlab.service interface.
In the process I tried to do some refactoring. I created a new interface ... "Attachable"! And I moved the interfaces that were common to ALL services to org.myrobotlab.framework.interfaces.
Hardware: one Arduino Mega, Activator, servo HV2060MG powered with 7.2V.
Software: latest MRL, no InMoov services.