The Robot with Two Brains REVISITED

A few weeks ago I was trying to work out how to pass data between services running on separate instances of MRL, one on a Mac Mini and one on a Raspberry Pi.

The main problem I was trying to solve was that I wanted Junior's voice to be produced by the Raspberry Pi in his head, but I wanted to run ProgramAB on a bigger computer. Several services that most of us use need to work in concert to perform listening and speaking. Currently I am running a USB microphone, which seemed easier to do on the Mac Mini than on the Raspberry Pi. I also wanted to divide up the services so that only simple text was being passed between machines.

I ended up with ProgramAB, WebkitSpeechRecognition, and HtmlFilter on the "main" computer (the Mac Mini) and MarySpeech on the Raspberry Pi "secondary" computer.  Inside MyRobotLab we have access to a great service for passing messages: MQTT. With MQTT you can set up a computer to subscribe to a topic and then react whenever a message is sent on that topic. What makes this even better is that you do not have to enforce a boot order to make sure the receiving machine is online before the sending machine starts publishing.
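To make that decoupling concrete, here is a toy, in-memory sketch of the topic pattern in plain Python. This is not MRL and not a real broker; `ToyBroker` is purely illustrative. With a real broker such as Mosquitto, the two machines never talk to each other directly, so each only needs the broker's address and boot order stops mattering:

```python
class ToyBroker:
    """Toy stand-in for an MQTT broker: topic string -> list of callbacks."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        # Register a callback for a topic; many subscribers per topic are fine.
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # Fan the payload out to every current subscriber on that topic.
        # A topic with no subscribers is simply dropped, which is why a
        # sender can safely start before any receiver exists.
        for callback in self.subscribers.get(topic, []):
            callback(payload, topic)


broker = ToyBroker()
heard = []

broker.publish("myrobotlab/hearing", "too early")   # no subscriber yet: dropped
broker.subscribe("myrobotlab/hearing", lambda p, t: heard.append((p, t)))
broker.publish("myrobotlab/hearing", "hello junior")

print(heard)  # [('hello junior', 'myrobotlab/hearing')]
```

The "too early" publish is the point: nothing crashes and nothing blocks, the message just goes nowhere, exactly the behavior that frees you from a boot order.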

In the code I have separated the MQTT configuration into its own script, mqttPubSubConfig.py, which I call from my main example script, mqttExampleRemoteMouth.py. On the Raspberry Pi I also call mqttPubSubConfig.py from inside my other script, mqttExampleRemoteBrain.py. Here is what the code looks like:


#file : home/kyleclinton/mqttPubSubConfig.py edit raw
#########################################
# mqttPubSubConfig.py
#
# by Kyle Clinton
#########################################
###
# I am running Mosquitto on my main computer
# I know Mosquitto is available for Mac and Linux, 
# but I am sure it is also available for Windows too
###
from java.lang import String
python = Runtime.getService("python")

topicHearing = "myrobotlab/hearing"
topicSpeaking = "myrobotlab/speaking"
qos = 0 # At most once (0), At least once (1), Exactly once (2).
##Running Mosquitto on the same device that is running the "main" scripts
## broker on other machines will be the IP of this device on the network!
broker = "tcp://127.0.0.1:1883"
 
clientID = "MqttMainController"
mqtt = Runtime.start("mqttHearing", "Mqtt")
python = Runtime.start("python", "Python")
 
print mqtt.getDescription()
 
mqtt.setBroker(broker)
mqtt.setQos(qos)
mqtt.setPubTopic(topicSpeaking)
mqtt.setClientId(clientID)
mqtt.connect(broker)
mqtt.subscribe(topicHearing, qos)

###For Testing
mqtt.publish("hello myrobotlab world")

python.subscribe("mqttHearing", "publishMqttMsgString")
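A note on that subscribe call: MRL's pub/sub wiring maps a publisher method such as publishMqttMsgString to a Python callback named onMqttMsgString (the scripts below rely on this). The helper here is my own hypothetical illustration of that renaming rule, not part of MRL:

```python
def callback_name(publish_method):
    """Derive the MRL-style callback name: 'publishX' maps to 'onX'.

    This mirrors the convention assumed in these scripts; it is an
    illustration, not an MRL API.
    """
    prefix = "publish"
    if not publish_method.startswith(prefix):
        raise ValueError("expected a publishX method name")
    return "on" + publish_method[len(prefix):]


print(callback_name("publishMqttMsgString"))  # onMqttMsgString
```

So subscribing python to "publishMqttMsgString" means you must define a function named onMqttMsgString in your script, which is exactly what both example scripts do.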


#file : home/kyleclinton/mqttExampleRemoteMouth.py edit raw
from java.lang import String
python = Runtime.getService("python")


#Add MQTT!
execfile("../Git/py_scripts/mqttPubSubConfig.py")

#Add Sight!
execfile("../Git/py_scripts/junior_sight.py")

# create a ProgramAB service and start a session
junior = Runtime.createAndStart("junior", "ProgramAB")
junior.startSession("ProgramAB", "default", "junior")

######################################################################
# create the speech recognition service
# Speech recognition is based on WebSpeechToolkit API
######################################################################
# Start the new WebGuiREST API for MRL
webgui = Runtime.createAndStart("webgui","WebGui")

######################################################################
# Create the webkit speech recognition gui
# This service works in Google Chrome only with the WebGui
######################################################################
wksr = Runtime.createAndStart("webkitspeechrecognition", "WebkitSpeechRecognition")
######################################################################
# create the html filter to filter the output of program ab
# this service will strip out any html markup and return only the text
# from the output of ProgramAB
######################################################################
htmlfilter = Runtime.createAndStart("htmlfilter", "HtmlFilter")
# add a link between the webkit speech to publish text to ProgramAB
wksr.addTextListener(junior)

junior.addListener("publishText","python","onTextResponse")
 
#  MQTT call-back
# publishMqttMsgString --> onMqttMsgString(msg)
def onMqttMsgString(msg):
  # print "message : ", msg
  junior.getResponse(msg[0])
  print "message : ",msg[0]
  print "topic : ",msg[1]

def onTextResponse(text):
  mqtt.publish(text)
  print "sending : ", text
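The MQTT callbacks in these scripts receive the message as a small array: msg[0] is the payload text and msg[1] is the topic it arrived on. Here is a plain-Python sketch of that unpacking, with hypothetical handler functions standing in for junior.getResponse() and mouth.speakBlocking():

```python
def on_mqtt_msg(msg, handlers):
    # msg[0] is the payload, msg[1] is the topic (as in the MRL callbacks).
    payload, topic = msg[0], msg[1]
    handler = handlers.get(topic)
    if handler is not None:
        return handler(payload)
    return None  # unknown topic: ignore, like an unsubscribed MQTT topic


# Stand-ins for the real services; names here are illustrative only.
handlers = {
    "myrobotlab/hearing": lambda text: "brain got: " + text,
    "myrobotlab/speaking": lambda text: "mouth says: " + text,
}

print(on_mqtt_msg(["hi", "myrobotlab/hearing"], handlers))      # brain got: hi
print(on_mqtt_msg(["hello", "myrobotlab/speaking"], handlers))  # mouth says: hello
```

Keeping the payload as plain text is what makes this split work: either machine can be restarted independently, and the worst case is a dropped line of dialogue.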


#file : home/kyleclinton/mqttExampleRemoteBrain.py edit raw
from java.lang import String
from time import sleep
pi = Runtime.createAndStart("pi","RasPi")

#Load Pub/Sub Service (MQTT)
execfile("../py_scripts/mqttPubSubConfig.py")


# Add in the controller for the head, neck and antenna servos; SHOULD be using the i2c 16-servo controller
#Load Junior's mouth!
execfile("../py_scripts/juniors_voice.py")

#Load Junior's eyes!
execfile("../py_scripts/juniors_eyes_4.py")

#####for testing
mouth.speakBlocking("Testing 1, 2, 3")


drawEyes()
sleep(2)
drawClosedEyes()
sleep(1)
drawEyes()

mqtt.subscribe("myrobotlab/speaking", 0)
#mqtt.publish("hello myrobotlab world")
python.subscribe("mqttHearing", "publishMqttMsgString")
# or mqtt.addListener("publishMqttMsgString", "python")
 
#  MQTT call-back
# publishMqttMsgString --> onMqttMsgString(msg)
def onMqttMsgString(msg):
  # print "message : ", msg
  mouth.speakBlocking(msg[0])
  print "message : ",msg[0]
  print "topic : ",msg[1]



mqtt.publish("What is your name?")
GroG:

Great post Kyle, excellent video too...

The demo and explanation showed cause and effect, even when you weren't talking to Junior ... he does like to interject ;)

You committed the files correctly into GitHub and I made a pull request from the "develop" branch to "master" and merged it; however, mqttExampleRemoteBrain.py does not exist. Additionally, the references to the code samples did not work because they had a bunch of formatting in them ...

Below is in the original form (looking at the source):

<p><span style="color: rgb(51, 102, 0); font-family: 'lucida grande', tahoma, verdana, arial, sans-serif;">[[home/kyleclinton/mqttPubSubConfig.py</span>]]</p>
<p><span style="color: rgb(51, 102, 0); font-family: 'lucida grande', tahoma, verdana, arial, sans-serif;">[[home/</span><span style="color: rgb(51, 102, 0); font-family: &quot;lucida grande&quot;, tahoma, verdana, arial, sans-serif;">kyleclinton</span><span style="color: rgb(51, 102, 0); font-family: 'lucida grande', tahoma, verdana, arial, sans-serif;">/mqttExampleRemoteMouth.py]]</span></p>
<p>[[<span style="color: rgb(51, 102, 0); font-family: 'lucida grande', tahoma, verdana, arial, sans-serif;">home/kyleclinton/mqttExampleRemoteBrain.py]]</span></p>
 
yup, all that muck, when you just want
#file : home/kyleclinton/mqttExampleRemoteBrain.py edit raw
 
The links to GitHub must be without formatting; otherwise the parser chokes on them.
There are two buttons above which can help. One is the Source button, where you can view the source of your post; the other is "Remove Formatting", where you can highlight a section of the post and remove/clean all formatting.
 
I agree, distributed computing is a "good thing"
Kakadu31:

This is really nice Kyle, I like the design of Junior. What "display" are you using as the mouth, and how do you control it together with the sound output? I may try something similar on my InMoov instead of jaw movement.

Keep up the good work!

kyle.clinton:

Junior's Mouth

I am really proud of Junior's mouth display. It is driven by the signal being sent to the speaker. I know I have a video of it working, but probably not one that describes the wiring. I will see if I have some notes on the wiring and post them as a reply, or create a blog specifically about Junior's mouth.

Thanks!

AutonomicPerfectionist:

Nice!

Hey Kyle, very impressive work here! I've seen some other robotics gurus do similar things with speaker-signal interception (James Bruton with his Ultron robot), but from what I've heard it's a very involved process. I'm currently working on an AudioSync service that plugs directly into Java's sound API, which would allow you to just subscribe to it and send the intercepted volume levels to whatever you want. It would only respond to sound from MyRobotLab, so it wouldn't attempt to sync to system audio. Unfortunately, it's taking longer than I expected (curse the Java sound API's complexity!), so for now this is the best way to sync audio with something else.

Very impressed with the networking feature with MQTT. I actually kinda forgot that service existed... Can't wait to see what else you can do with two brains!