Nice Mats.
Impressive that the Raspi can run all that ... MRL on a JVM, an X Windows server, & a remote desktop server (vncserver or xrdp?)
To conserve some resources, have you thought about not using the remote desktop server or X Windows - e.g. run level 2?
MouthControl seems a bit out of sync too?
What's his name?
I use xrdp to connect to the Pi. I will try to shut down some services to save resources. But at this stage I reinstall MRL frequently, and it's easier to do using the browser.
I'm using the new version of MouthControl / AcapelaSpeech that was recently changed. It works much better now than it did before the changes to the events, but it's still not perfect. Perhaps the voice I'm using speaks at a different speed than the default.
He doesn't have a name yet, so he feels a little like a nobody....
If I ask him, he says 'My name is Alice 2.0' :-) I need to come up with a better name when he gets his body.
Nice end-to-end demo!
Excellent demo Mats! I noticed you have a different eye mechanism than I do. It looks much more stable than the one I have currently.
It looks like the speech recognition was working pretty well for you. Hopefully we can integrate the face recognition, so you can specify the user name of whoever is speaking with the chat bot. ProgramAB can maintain different sets of facts about people, such as their name, age, etc., in the predicates file. It would be a very cool demo to walk up to the InMoov and have it recognize who you are. Say "what is my name?" and have the robot/ProgramAB tell you who it thinks you are...
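To picture the per-user facts, here is a minimal MRL (Jython) sketch of the idea. The method names (createAndStart, startSession, getResponse) and the "alice2" bot name are assumptions based on typical scripts of that era, so check your MRL build for the exact API.

# Runtime is provided by MRL's Python service; the calls below are assumptions.
brain = Runtime.createAndStart("brain", "ProgramAB")

# Assumed: one session, and one predicate set (name, age, ...), per user.
brain.startSession("Mats", "alice2")
brain.startSession("Kevin", "alice2")

# The same question can then get a different answer depending on who is asking.
print(brain.getResponse("Mats", "What is my name?"))
print(brain.getResponse("Kevin", "What is my name?"))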
Great suggestion
Hi Kevin,
I redesigned the eye mechanism and the internals of the head. All the modifications that I have done are available on Thingiverse.
I really like your suggestion on connecting ProgramAB with WebkitSpeechRecognition and FaceRecognition so that it will switch client session either when you say "My name is ..." or when the face recognition sees your face. I think all the bits and pieces are there already to be able to do the first part. I just have to understand a little more about the different AIML files and the oob tag.
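As a rough sketch of that first part (switching the session from speech alone), something like the following Jython could sit between the ear and the brain. The publishText topic, the addListener signature, and the startSession arguments are assumptions from typical MRL scripts, not a tested recipe:

import re

# Assumed service wiring, as in common MRL Jython scripts of this era.
ear = Runtime.createAndStart("ear", "WebkitSpeechRecognition")
brain = Runtime.createAndStart("brain", "ProgramAB")
brain.startSession("default", "alice2")

def heard(text):
    # When the speaker introduces themselves, switch the ProgramAB session so
    # their predicate set (name, age, ...) is used from then on; otherwise chat.
    m = re.match(r"my name is (\w+)", text.strip(), re.IGNORECASE)
    if m:
        brain.startSession(m.group(1), "alice2")
    else:
        brain.getResponse(text)

# Assumed: the ear publishes recognized text on "publishText" and the Python
# service can subscribe a named callback to it.
ear.addListener("publishText", "python", "heard")

The FaceRecognition half could call the same startSession from whatever it publishes, once it is confident about who it sees.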
I would also like to be able to address the robot, so that it only responds to sentences that it's supposed to react to. Something like "Robot, what is my name."
I think that is more natural than having to confirm every message.
I'm watching your progress with face recognition with great interest. It looks very good. Keep up the good work and thanks for making it possible.
/Mats
Speech Recognition Keyword
I like the idea of having a prefix that must be matched in order for webkit speech to publish the recognized text.
So you would always have to say "robot" (or some other key phrase) followed by the text you want it to recognize.
The remaining part of the recognized text could be published (excluding the keyword) ...
So, I think that could be added to the WebkitSpeechRecognition service...
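A plain-Python sketch of that filter - the "robot" keyword is just a placeholder, and exactly where it would hook into WebkitSpeechRecognition's publish path is left open:

KEYWORD = "robot"

def filter_recognized(text):
    # Return the text after the keyword, or None if the keyword is missing
    # (i.e. the utterance was not addressed to the robot and is not published).
    words = text.strip().split(None, 1)
    if not words or words[0].lower() != KEYWORD:
        return None
    return words[1] if len(words) > 1 else ""

print(filter_recognized("robot what is my name"))   # -> "what is my name"
print(filter_recognized("what is my name"))         # -> None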
really nice Mats!
Great Mats - it looks a bit scary when he looks at his nose with both eyes!