This is a preliminary test of the upcoming Google Assistant service, created using mrlpy.

This service is most definitely NOT FINISHED! There are many bugs in the current implementation. The code is on a fork at my github page, with a pull request in progress to merge into the develop branch.

Features

This service is compatible with most, if not all, Google Assistant features. Once authenticated, it can be used for smart home control, information gathering, games, timers and alarms, and everything else a Google Home can do. In theory, MRL could also be controlled from any other Google Assistant device via Actions on Google (not tested). This service provides wake-word detection ("Ok Google"), speech synthesis, and speech-to-text via the Google Assistant gRPC API.

It is possible, though difficult, to use this service as a pure speech-to-text service (like WebkitSpeechRecognition or Sphinx). Doing so would require further updates to mrlpy and to this service: the output of the Google Assistant library would have to be parsed without speaking the response (sending the audio to a file, or /dev/null), and a conversation would have to be started without the wake word. This service is also capable of reading a request from an audio file and writing the response to another audio file (not yet implemented, but easy to do).
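The speech-to-text idea above can be sketched as filtering the assistant's event stream for final transcriptions while ignoring the response events. This is only an illustration: the `Event` class and event names below are simplified stand-ins, not the real Google Assistant library API.

```python
# Sketch: use an assistant-style event stream purely for speech-to-text
# by keeping only the final recognition results and discarding the
# spoken-response events. Event and the event names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Event:
    type: str
    args: dict = field(default_factory=dict)


def transcripts(events):
    """Yield only the final recognized utterances from an event stream."""
    for event in events:
        if event.type == "ON_RECOGNIZING_SPEECH_FINISHED":
            yield event.args.get("text", "")
        # Response events (e.g. ON_RESPONDING_STARTED) are ignored,
        # so nothing would be spoken back to the user.


stream = [
    Event("ON_CONVERSATION_TURN_STARTED"),
    Event("ON_RECOGNIZING_SPEECH_FINISHED", {"text": "what time is it"}),
    Event("ON_RESPONDING_STARTED"),
]
print(list(transcripts(stream)))  # -> ['what time is it']
```

A real implementation would also need to redirect or suppress the audio output and trigger conversations programmatically, as described above.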

Bugs

There are many bugs associated with the current implementation as of 8/16/17. Here they are:

1. Authentication must be done manually by entering the virtualenv and running the one-time authentication script with the path to the included client secret file as its argument (search Google for "setting up Google Assistant SDK" and skip the steps for obtaining a client secret).

2. The service fails silently. If an error occurs during initialization, the user is not made aware of it. This is due to the mechanism used to start the child process; once the child process and MRL are synchronized, it becomes possible to pass errors back (not yet implemented).

3. This service does NOT normally show up in the GUI, due to a bug in generating serviceData.json (the service does not directly inherit from Service; it inherits from PythonProxy, which in turn inherits from Service). This can be temporarily fixed locally by manually editing serviceData.json (which I did for the purpose of demonstration).

4. Requires two external programs: xterm (will be removed soon) and ALSA (for audio input and output). Both are installed by default on Ubuntu. Everything else is included in the dependency zip file.

5. It is unknown whether the dependency archive extracts correctly. It should extract similarly to the InMoov script, placing a folder called native-mrl-services in the root MRL directory. I haven't been able to test this yet.

6. Startup is extremely slow, because the start command first checks for updates to the virtualenv, then starts the synchronization process, and only then starts the Google Assistant library.

7. Audio input sometimes fails. This is a bug in the Google Assistant library and only happens occasionally.

GroG

6 years 7 months ago

Awesome AutonomicPerfectionist (please change your avatar - you are not a scared gopher :),

I'm excited to see your enthusiasm and work.  Excellent demo and write-up. 

I see you're using WebGui as a gateway for message delivery. Very cool. I'm currently working on a very large refactor of mrl where messaging becomes much more general and extensible. You won't need an "exec" to send the message, nor any other utility or helper functions. Routing & delivery will auto-magically be handled by the details of the "name" in the message.

The soon-to-be format will be:

       {id}-{name}   - where {id} is the process/instance id of an mrl instance and {name} is the service name

If I had an instance running in the cloud named "chicago"  and a service on it called "bob", my local running instance would just

send("chicago-bob", "method", param0, param1, ...)

the message "may" go through webgui, or remoteadapter, xmpp, or some other gateway service - or worst-case scenario, timeout... but you as a developer only need to care about the name.
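The {id}-{name} addressing scheme above can be sketched as a small parser. This is only an illustration of the idea, not mrl's actual implementation, and it assumes the instance id itself contains no dash:

```python
def parse_address(addr):
    """Split a '{id}-{name}' message address into (instance_id, service).

    Splits on the first dash only, so service names may themselves
    contain dashes or dots (e.g. 'chicago-inmoov.leftArm').
    """
    instance_id, sep, service = addr.partition("-")
    if not sep or not instance_id or not service:
        raise ValueError(f"expected '{{id}}-{{name}}', got {addr!r}")
    return instance_id, service


print(parse_address("chicago-bob"))  # -> ('chicago', 'bob')
```

With an address parsed this way, a sender only names the destination; which gateway actually carries the message is left to the routing layer.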

The demo was great, keep up the work! Excited to see upcoming capabilities.

I'm curious:
Are you a Python developer? And are you currently working on building an InMoov?

Alright, changed my avatar!

Here's where it comes from:

The trapezoids symbolize structure and support. There being three of them symbolizes the Three Laws of Robotics. Finally, the arrangement symbolizes support through unity (I like symbols :b)

Now, the new message system looks great! Excited to see it! One question though: is it possible to require a message be sent through a particular gateway (specifically webgui)? I'm not sure if mrlpy would be able to receive messages through this new system without requiring a webgui, but then again I haven't seen it or its capabilities yet.

Finally, answers to your questions:

1. I'm not actually a professional developer at all (I just started my senior year of high school this week). I'm more of a Java programmer, though. I was actually first exposed to Python through MRL. Once I started switching my computers over to Linux (Windows is too slow on an Acer Aspire One ;b), I saw Python more and more, so I figured it would be beneficial to start learning some. Mrlpy is my first project in Python.

2. Yes, I am building my own InMoov (his name's Hugo). I've got most of the right hand and forearm completed, and the front and back torso plates assembled. I'm trying a new approach to the chest though: translucent white with backlights for status, emotions, etc. Probably a NeoPixel ring in the center with NeoPixel stems branching off and following the contours of the chest.


(Ignore the paint job, still working on that)

Great avatar !

I like your choice.

Messages would flow through gateways depending on a dynamic internal routing table.
Instances of mrl directly connected to one another already know which gateway to take, because typically when you connect 2 instances together they will share that info.

e.g. webguigateway.connect("192.168.0.7") leads to an exchange ("hi, this is houston") <--> ("hi, this is chicago")... so to find "chicago-bob", the system knows to first send messages to the newly connected "chicago" instance through the webguigateway. But if we are connected and there was no exchange, or our message did not get delivered, we do a broadcast - similar to ARP. If we still can't find our destination, it is routed over the "default" gateway. For example, say we want to send a message to "hawaii-bob", but "hawaii" is not directly connected to us; we only have one gateway (the webguigateway), so we hope "hawaii-bob" will get the message if we send it "through" "chicago".
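The lookup described above can be sketched as a table of learned routes with a default-gateway fallback. Again, this is only an illustration of the idea; the names and structure are hypothetical, not mrl's code:

```python
def route(dest_id, routes, default_gateway):
    """Pick a gateway for a destination instance id.

    routes maps instance ids to the gateway that reaches them
    (learned from the hello exchange when instances connect).
    Unknown destinations fall back to the default gateway, in the
    hope that a connected peer can forward the message onward.
    """
    return routes.get(dest_id, default_gateway)


# Routes learned from direct connections; names are illustrative.
routes = {"chicago": "webguigateway", "houston": "remoteadapter"}

print(route("houston", routes, "webguigateway"))  # -> remoteadapter
print(route("hawaii", routes, "webguigateway"))   # unknown -> default gateway
```

A fuller version would also model the broadcast step (ask every directly connected instance whether it knows the destination) before falling back to the default gateway.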