A few weeks ago I was trying to figure out how to pass data between services running on two separate instances of MRL, one on a Mac Mini and one on a Raspberry Pi.

The main problem I was trying to solve was that I wanted Junior's voice to be produced by the Raspberry Pi in his head, but I wanted to run ProgramAB on a bigger computer. There are several services most of us use that need to work in concert to handle listening and speaking. Currently I am running a USB microphone, which seemed easier to do on the Mac Mini than on the Raspberry Pi. I also wanted to divide up the services so that only simple text was being passed between machines.

I ended up with ProgramAB, WebkitSpeechRecognition, and HtmlFilter on the "Main" computer (the Mac Mini) and MarySpeech on the "Secondary" computer (the Raspberry Pi). Inside MyRobotLab we have access to a great service for passing messages: MQTT. With MQTT you can set up a computer to subscribe to a topic and then react to each message published on that topic. What makes this even better is that you do not have to enforce a boot order to make sure the machine receiving messages is online before the device sending data starts publishing.
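
If you have not used MQTT before, the subscribe side of the pattern is only a few lines. Here is a minimal sketch using plain paho-mqtt rather than the MRL Mqtt service, just to show the idea; the broker address and topic name are placeholders, not what my scripts use:

    # Minimal MQTT subscriber sketch (pip install paho-mqtt).
    # This is plain paho-mqtt, not the MRL service API.
    import paho.mqtt.client as mqtt

    BROKER = "192.168.1.10"   # hypothetical broker on the LAN
    TOPIC = "junior/mouth"    # hypothetical topic carrying text to speak

    def on_message(client, userdata, msg):
        # Called for every message published on the subscribed topic,
        # no matter when the publisher came online.
        print("received:", msg.payload.decode())

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER, 1883)
    client.subscribe(TOPIC)
    client.loop_forever()     # block and keep handling messages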

In the code I have separated the MQTT configuration into its own script, mqttPubSubConfig.py, which I call from my main example script, mqttExampleRemoteMouth.py. On the Raspberry Pi I also call mqttPubSubConfig.py from inside my other script, mqttExampleRemoteBrain.py. Here is what the code looks like:

[[home/kyleclinton/mqttPubSubConfig.py]]

[[home/kyleclinton/mqttExampleRemoteMouth.py]]

[[home/kyleclinton/mqttExampleRemoteBrain.py]]
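
The essential flow is small: the Main computer publishes the chatbot's HTML-filtered reply as plain text, and the Pi hands whatever arrives to MarySpeech. Here is a rough sketch of the publishing side in plain paho-mqtt; the broker, topic, and say_remote helper are illustrative placeholders, not what the scripts above contain:

    # Rough sketch of the publishing ("brain") side in plain paho-mqtt.
    # The real scripts use MRL's Mqtt service; this just shows the flow.
    import paho.mqtt.client as mqtt

    BROKER = "192.168.1.10"   # hypothetical broker address
    TOPIC = "junior/mouth"    # hypothetical topic the Pi subscribes to

    client = mqtt.Client()
    client.connect(BROKER, 1883)
    client.loop_start()       # handle network traffic in the background

    def say_remote(text):
        # Only plain text crosses the network; on the Pi, the matching
        # subscriber hands the payload to MarySpeech to be spoken.
        client.publish(TOPIC, text)

    say_remote("Hello, I am Junior.")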

GroG

7 years ago

Great post Kyle,

excellent video too...

The demo and explanation show cause & effect nicely, even when you weren't talking to Junior ... he does like to interject ;)

You committed the files correctly into GitHub, and I made a pull request from the "develop" branch to "master" and merged it; however, mqttExampleRemoteBrain.py does not exist.  Additionally, the references to the code samples did not work because they had a bunch of formatting in them ...

Below is in the original form (looking at the source):

<p><span style="color: rgb(51, 102, 0); font-family: 'lucida grande', tahoma, verdana, arial, sans-serif;">[[home/kyleclinton/mqttPubSubConfig.py</span>]]</p>
<p><span style="color: rgb(51, 102, 0); font-family: 'lucida grande', tahoma, verdana, arial, sans-serif;">[[home/</span><span style="color: rgb(51, 102, 0); font-family: &quot;lucida grande&quot;, tahoma, verdana, arial, sans-serif;">kyleclinton</span><span style="color: rgb(51, 102, 0); font-family: 'lucida grande', tahoma, verdana, arial, sans-serif;">/mqttExampleRemoteMouth.py]]</span></p>
<p>[[<span style="color: rgb(51, 102, 0); font-family: 'lucida grande', tahoma, verdana, arial, sans-serif;">home/kyleclinton/mqttExampleRemoteBrain.py]]</span></p>
 
yup, all that muck, when you just want [[home/kyleclinton/mqttExampleRemoteBrain.py]]
 
The links to GitHub must be without formatting, otherwise the parser chokes on them.
There are two buttons above which can help. One is the "Source" button, where you can view the source of your post; the other is "Remove Formatting", where you can highlight a section of the post and remove/clean all formatting.
 
I agree, distributed computing is a "good thing"

This is really nice Kyle, I like the design of Junior. What "display" are you using as the mouth, and how do you control it together with the sound output? I may try something similar on my InMoov instead of jaw movement.

 

Keep up the good work!

I am really proud of Junior's mouth display. It is driven by the signal being sent to the speaker. I know I have a video of it working, but probably not one that describes the wiring. I will see if I have some notes on the wiring and post them as a reply, or create a blog post specifically on Junior's mouth.

Thanks!

AutonomicPerfe…

7 years ago

In reply to kyle.clinton

Hey Kyle, very impressive work here! I've seen some other robotics gurus do similar things with speaker signal interception (James Bruton with his Ultron robot), but from what I've heard it's a very involved process. I'm currently working on an AudioSync service that plugs directly into Java's sound API, which would allow you to just subscribe to it and send the intercepted volume levels to whatever you want. It would only respond to sound from MyRobotLab, so it wouldn't attempt to sync to system audio. Unfortunately, it's taking longer than I expected (curse the Java sound API's complexity!), so for now this is the best way to sync audio with something else.

Very impressed with the MQTT networking, too. I actually kinda forgot that service existed... Can't wait to see what else you can do with two brains!