Hello all!

Over the past week, I've been working on a little addition to MRL's service capabilities. I wanted to create a Google Assistant service, but the API is in Python (a Java implementation exists, but it is hard to follow and has far fewer features than the Python example). To remedy this, I built a native Python API called mrlpy. This API currently supports registering as a service, process synchronization, a compatibility layer for scripts written for the Jython interpreter, and proxy service generation.

The API can be installed using pip: 

$ sudo pip install mrlpy

The source code and API documentation can be found on my GitHub page.

mrlpy will be updated shortly to support full compatibility and full message passing. The companion service (PythonProxy) will be committed to the development branch soon (just a little refactoring left). The WebGui service will have to be updated so that PythonProxy can access the list of clients and determine which one is the native python service.

After finishing up these problems, I will start work on a Google Assistant service and possibly a ROS bridge.

P.S.

I noticed that recent commits show that user grperry checked in a blank Proxy class in the develop branch. Perhaps we could combine our efforts?

GroG

7 years 5 months ago

Wow .. I read this and it was a really enjoyable surprise.  I'm very impressed AutonomicPerfectionist.  I began reading your post and thought, "cool, another person has worked on the MRL API and found some use out of it" ..  however, the more I read the more excited I got.

"The API currently supports registering as a service, process synchronization, .... "

You had me at registering as a service ;)

"mrlpy will be updated shortly to support full compatibility and full mesage passing"

Excellent !
Google Assistant ?  ROS bridge ?  Awesome.

The WebGui service will have to be updated so that PythonProxy can access the list of clients and determine which one is the native python service.

I'd like to know in what way .. I have a large commit coming which will "add to" the existing API; perhaps what you need is in the future commit, depends ...  I'd need more info, perhaps the Proxy service can suffice .. 

Ya, I'm grperry when I'm too lazy to change my .gitconfig global config :)

WebGui and the processing of the MRL API has been refactored substantially.  The API itself hasn't changed "much".  It has a few changes which are important, but you might be able to disregard them.  I'd have to see how you start communication.

The purpose of the refactoring was to support something currently called WebGuiClient service.  This currently is a websocket client which can attach from one MRL instance to another instance running WebGui.  It would allow distributed messaging over multiple instances of MRL.  

WebGuiClient is a Java implementation of a client which sounds exactly like what your mrlpy does. I have been working on a Java websocket client, but it was my full intention that the changes I have done were enough to fully support clients from any language ... e.g. JavaScript (which is our Angular display), Python, Go, etc ....

The "Proxy" service is what you probably already guessed - a "representitive of a Service in the registry which is "not" Java.   It is a placeholder for a remote service and a message & integration point for all the services currently running in Java MRL. 

I'll try to go over the code you have checked into GitHub, and I am excited to see your project.

 

Hey, thanks GroG!

Yeah, the WebGuiClient service is probably what I need. I've written a workaround that just uses webgui.broadcast() to forward messages, but that's inefficient and can also cause problems when there is more than one service with the same name. I've tried to check in my code, but I don't seem to have access. I've instead pushed it to a fork on my GitHub page for the time being.

Thanks again!

The WebGuiClient is what you want to do .. but in Python.
The WebGuiClient is a consumer of the WebGui's API - but over JSON serialization, which Python, JavaScript, etc. can use.  

You won't "need" the WebGuiClient - You'll probably want to follow what its doing - and do the same thing in your WebGuiClient.py  (which would be conceptually the same) .. I'm pretty sure you already have something similar to it - but probably named something else.

I did a very cursory look at mrlpy.  I can tell your Python-Fu is very great ...  my Python-Fu is very weak in comparison.  But I'm very excited by what you are doing.

Instead of a PythonProxy, I think perhaps it might be best to have a generalized Proxy for "all" languages which do not completely serialize into the current language running in the current process.  

For this, the WebGui must intercept the "register" request - check to see if it's natively serializable - if not create a Proxy - then load the proxy with the incoming data and continue with the runtime registration.

In addition, there should be a more structured communication setup - even before registration. That's the reason I created a Hello object. The Hello object contains platform info, but with this discussion I can see that it's too "Java-centric" and I'll start changing it to be more universal. The Hello object contains the local Platform - which contains MrlVersion and other details.
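Roughly, a language-neutral Hello payload might look something like the sketch below - just an illustration, the field names here are placeholders and not the final Hello/Platform schema:

# Illustration only: a guess at what a language-neutral "Hello" payload could carry.
# Field names are placeholders, not the actual MRL Hello/Platform fields.
import json
import platform

hello = {
    "name": "python01",                  # name the remote process wants to register under
    "platform": {
        "mrlVersion": "unknown",         # version of the MRL API the client speaks
        "language": "python",            # implementation language of the remote process
        "os": platform.system().lower(),
        "arch": platform.machine(),
    },
}

print(json.dumps(hello))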

I'd like to get a post or example script of how to use mrlpy.  I'm very interested in making MRL language agnostic, since the real power of MRL is communication & messaging vs specific implementations.

I know this is a bit of a ramble, but it's morning & pre-coffee ... I'll have code, docs and diagrams which will make more sense .. but all this takes time.. :)

 

Thanks for the ideas. I think I know what to change, but unfortunately I can't work on it today (landscaping with my dad in 100-degree heat, fun), so I'll start making changes tomorrow. After that, though, I have to start studying for my ACT test on Saturday, so it'll be a little bit before I can finish making the changes. I'll take a look at WebGuiClient tonight and figure out how to translate it. I'll also quickly spin up an example script and put it on pyrobotlab.
Thanks again!

Ya ... sounds like fun :)

A quick disclaimer ...  WebGuiClient doesn't have much to copy at the moment .. still working on it,
but it "will" be the one to copy from - as it will:

  • create a ws connection with webgui
  • use the message api
  • say "Hello" with platform information
  • register itself
  • use the pub/sub 
  • use the Proxy service
  • use blocking api

In the process of making that stuff now.
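For a Python client, the skeleton might look something like the sketch below. It assumes the websocket-client package, WebGui on its default port 8888, and the /api/messages endpoint; the message field names are guesses, not the finalized schema:

# Sketch only: a minimal remote client talking to WebGui over a websocket.
# Assumes the websocket-client package (pip install websocket-client), WebGui on
# its default port 8888, and the /api/messages endpoint. Message field names are
# guesses, not the finalized MRL message schema.
import json
import websocket

def on_open(ws):
    # Say "hello" / ask to be registered as a remote (Python) service.
    register = {
        "name": "python01",                 # name of the remote service
        "method": "register",               # method to invoke on the receiving side
        "data": [{"language": "python"}],   # platform info travels with the request
    }
    ws.send(json.dumps(register))

def on_message(ws, raw):
    msg = json.loads(raw)
    print("message for", msg.get("name"), "->", msg.get("method"))

ws = websocket.WebSocketApp(
    "ws://localhost:8888/api/messages",
    on_open=on_open,
    on_message=on_message,
)
ws.run_forever()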

Just a heads-up, GroG, but the current WebGui is having problems with POST requests. For whatever reason, Atmosphere is labeling the request's content-type as text/plain, and request.body().asString() returns null. Using Lamiak's WebGui works perfectly with no changes to mrlpy. I've looked at the headers before sending and they are correct, but Atmosphere is still having problems. I've temporarily fixed it in my local copy by transplanting Lamiak's WebGui over and modifying it just a tiny bit so that it would compile (changing createMessage() calls to Message.createMessage(), and adding a null NameTypeProvider as the first argument to createMessage() calls).
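For reference, the POST that fails boils down to something like the sketch below (simplified, using the requests library; the endpoint and payload here are placeholders rather than mrlpy's exact call):

# Simplified sketch of the failing POST, using the requests library.
# The endpoint and payload are placeholders, not mrlpy's exact call.
import json
import requests

payload = {"method": "register", "data": ["python01"]}

resp = requests.post(
    "http://localhost:8888/api/messages",            # placeholder endpoint
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},    # header is set correctly on the wire
)
print(resp.status_code, resp.text)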

As for mrlpy's examples, they are up in the examples folder on the GitHub page. They have been excluded from PyPI, so they will not be installed with the rest of mrlpy.

Thanks AP,

Not totally surprised, considering the messaging system pretty much got ripped out and now all of the messaging API is getting refactored and put into modular classes outside of Services. Sorry for the disruption; the pieces should be back together shortly.

Messages needed to know how to create themselves, without Services.

I'm curious, does the WebGui work for you currently?  If not, I should check in more refactoring, because I test the WebGui Angular client pretty regularly - and locally it's working for me.

Thanks for the demo scripts and information for mrlpy!   I think it was really fortunate timing for us both to be working on this, but from opposite sides at the same time.  It's like two teams digging a tunnel blind for weeks and then meeting one another...  :)

Hey, sorry for not responding quickly (suddenly got really busy around the house).

Unfortunately, my WebGui is not working with the latest commit. From what I see, it looks as if the body of the request is consumed elsewhere and does not get to the handle method of WebGui. Also, the service API is broken for params encoded in a POST body; I've been working on that recently. It looks as if synchronous returns are not yet supported in the current commit.

This is the segment of the log that seems to show there is a problem:

00:03:23.395 [New I/O worker #6] INFO  class org.myrobotlab.service.WebGui - GET /api/messages
00:03:23.424 [New I/O worker #6] ERROR class org.myrobotlab.service.WebGui - msg is null null
 

Unfortunately, I will be out of town for a week starting tomorrow, so I won't be able to add on or anything. Just a heads-up, the proxy service is still called PythonProxy because I don't want to overwrite the Proxy class just yet. It currently supports executing native services, but only through the python command.

Thanks, GroG, for the help! I will be back Tuesday and will continue working on this project then.

 

P.S. Still having access problems, so code is pushed to my fork

juerg

7 years 5 months ago

In what way will mrlpy be different from the Python service we already have in MRL?

Hello juerg

Python allows writing extensions to the interpreter, normally written in C. Some libraries, such as lxml, require these extensions in order to function.

In MRL, the Python service uses a Python interpreter written in Java called Jython. Jython supports all pure Python ("Pythonic") code, but, for reasons of portability and simplicity, it is incapable of loading Python modules that were written as C extensions. Instead, Jython allows importing Java classes as modules, but that still leaves a compatibility hole. Therefore, certain Python libraries will not work in Jython and can't be used in MRL. mrlpy is a library that runs in the standard CPython interpreter (the default interpreter that is installed when installing Python), and is therefore fully compatible with Python modules written in C or other native code. Using mrlpy does make MRL more platform-dependent, but it also opens the door for many more services that take advantage of those native / C libraries.
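A quick way to see the difference: lxml is built as a C extension, so the same import behaves differently under the two interpreters (this is just an illustration; the exact error varies):

# Illustration: importing a C-extension module such as lxml.
# Under CPython (what mrlpy runs in) this succeeds if lxml is installed;
# under Jython (what MRL's Python service uses) it fails, because Jython
# cannot load compiled C extension modules.
try:
    import lxml.etree
    print("C extension loaded - running on CPython")
except ImportError as err:
    print("C extension unavailable - likely Jython (or lxml not installed):", err)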

CPython and therefore mrlpy are supported on all major platforms, but certain Python libraries, since they are written in C, will not work on certain platforms.

Mats

7 years 5 months ago

I started to look at ROS and found that there is a Java implementation of ROS called RosJava.

These are the steps to install RosJava on Windows without installing ROS. The links are for Linux I guess, but the procedure is very similar.
 
 
Step 1. Check out the kinetic branch of rosjava_core ( the installation instructions say git checkout -b hydro origin/hydro, but that's an old release )
 
cd rosjava_core
git checkout -b kinetic origin/kinetic
or, if the branch already exists:
git checkout kinetic
 
Step 2. Build rosjava_core using Gradle ( http://rosjava.github.io/rosjava_core/latest/building.html )
 
cd rosjava_core
gradlew install
 
Step 3. Build the documentation ( A few steps failed when creating the docs in Windows, I didn't dig into the details ).
gradlew docs
 
Step 4. Run the tests ( 142 tests completed, 8 failed. Again, I didn't dig into the details of the 8 failing tests ).
gradlew test
 
Step 5. Generate Eclipse project files
gradlew eclipse
 
In the examples you can find Client.java and Server.java
and also
Listener.java and Talker.java
 
I guess they could be used as templates for building the bridge.
 
I have an instance of ROS running on my Raspberry, but have only made very basic tests using the messaging system. Steep learning curve.
 
So I'm very far from using ROS, but I think it contains some good stuff, so a bridge would be a good thing to have in MRL.

AutonomicPerfe…

7 years 5 months ago

In reply to Mats

Hello Mats, and thanks for the info.
I've used rosjava before (attempted a bridge a while ago), and I did get it to communicate with ROS successfully. Unfortunately, rosjava is no longer under active development and is only compatible with, I believe, ROS Jade. Kinetic is the current release, with Lunar being released soon. There has been work on rosjava 2.0 (for the newer, more distributed ROS version that is semi-compatible with normal ROS), but it's still in early development. In addition, rosjava requires the generation of message types to be compatible with ROS networks, something that is not quite perfect. rosjava would be perfect for a ROS bridge from MRL, as long as the network is Jade or older (Hydro or Groovy). I'll take a look at my old bridge code to see if I can make it work this time. Thanks for the info, Mats!

Thanks, GroG! Mrlpy is worky once more. I will be changing PythonProxy to Proxy and abstractifying handshake and message communication so that Proxy may be subclassed. Thanks!

juerg

7 years 1 month ago

Hi Perfectionist

I do not know how to contact you directly so I give it a try through this post ...

I know that Kevin spent some time on object recognition in images; however, the setup in MRL is a bit cumbersome, many of my objects are declared as "pencil sharpener" (haven't seen one in decades ...), and it does not provide position and distance.

With the great examples (e.g. the 007 video) on the Yolo page, I thought it would be nice to have a Python executable doing object classification with the Yolo library and exchanging commands/results with MRL Python snippets through rpyc.

There look to be a few ways to integrate that library into Python, but they all come with instructions I can't really follow. I am used to "pip install <library>" and then simply importing it into my code.

So, is creating a pip installation file a tedious and time-consuming task, or what is the reason the simple way is not available?

Thanks, Juerg

Took a look at Yolo. Pretty interesting stuff there; it'd be pretty good for MRL. It seems to be based on DarkNet with some additional models and configuration. Unfortunately, DarkNet is actually a C library, not just normal Python. This makes installation via pip tricky, since pip doesn't segment packages based on architecture. There are three options:

1. The pip distribution tarball must contain compiled binaries for every architecture (extremely bad practice and very bloated), and the Python bridge decides which architecture to use (difficult, requires runtime modification of the Python path, and also bad practice).

2. The pip installation file (setup.py) invokes the system compiler on installation. This is what other projects, such as pyobjc, use. The only problem is that it requires the user to have a proper compiler toolchain installed before attempting installation (see the sketch after this list).

3. An alternative package manager that supports architecture segmentation is used instead of pip. This is the best practice, but it causes problems with dependencies: it cannot depend on any pip package, and pip packages cannot depend on it. You'd have to convert a pip package (containing the Python bridge code) to a Debian source archive and then inject the compiled DarkNet (and Yolo) binaries into that before compiling the archive into a full .deb file (not as scary as it sounds).
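To give an idea of option 2, a setup.py that compiles C sources at install time looks roughly like the sketch below (the module name and source file are made up for illustration, not the actual DarkNet source layout):

# Minimal sketch of option 2: setuptools invoking the system C compiler at install time.
# The module name and source file are made up for illustration; they are not the real
# DarkNet/Yolo layout.
from setuptools import setup, Extension

darknet_ext = Extension(
    name="pydarknet",                       # hypothetical extension module name
    sources=["src/pydarknet_module.c"],     # hypothetical C sources compiled during install
    libraries=["m", "pthread"],             # link against libm and pthread, as DarkNet does
)

setup(
    name="pydarknet",
    version="0.1",
    description="Illustrative C-extension packaging example",
    ext_modules=[darknet_ext],
)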

 

My suggestion is to wrap DarkNet (and Yolo) into a Debian package format, compile the C sources for the architecture you need, and then upload to a PPA (Ubuntu only) so that others may simply invoke "sudo apt-get install yolo". Unfortunately, there is no way to bypass the compilation step as there are no precompiled binaries available.

If you tell me the architecture and platform you are working with, I might be able to compile it for you and wrap it up into a nice and neat .deb file for easy installation. I have ARM, x86, x86-64 and PPC computers (all Linux), and a 64-bit Mac virtual machine (oh, how I hate that thing). Those are the architectures and platforms I can build for. I've already compiled and run Yolo on my 64-bit Linux machine, and it worked... just really slowly, because I'm cheap and pulled my computer's parts from the dumpster ;)

After installation of the library using any of those three methods, you should be able to simply import and use the library like any other. See this for info on the Python API.
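With the darknet.py wrapper that ships with DarkNet, usage comes down to roughly the sketch below (function names and file paths vary between DarkNet versions, so treat it as illustrative only):

# Rough sketch of using DarkNet's bundled Python wrapper (darknet.py) once the shared
# library is built and importable. Function names and file paths vary between DarkNet
# versions, so treat this as illustrative only.
import darknet as dn

net = dn.load_net(b"cfg/yolo.cfg", b"yolo.weights", 0)   # network definition + weights
meta = dn.load_meta(b"cfg/coco.data")                    # class names / metadata
detections = dn.detect(net, meta, b"data/dog.jpg")       # run detection on an image file

for label, confidence, (x, y, w, h) in detections:
    print(label, confidence, x, y, w, h)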

Side-note: The developer of DarkNet (and hence Yolo) has only tested compilation on Linux and Mac, so I have no idea if it'll even compile for Windows. It may work if platform-dependent code is removed (such as CUDA GPU acceleration), but that would affect performance severely.

Side-note 2: I've had the same issue with pencil sharpeners being the only thing recognized. I believe I fixed it by switching models from tiny-Yolo to the normal Yolo model, but I'm not sure (I worked with Yolo a while ago).

Side-note 3 (I promise this is the last one): Since DarkNet is a C library, it's possible to call the functions directly from Java, bypassing the entire Python side of things. This could be done through either JNI (requires additional C source files to create the bridge) or JNA (allows calling the C functions directly, so no need for separate bridge code). I'll look into these soon, but if it's possible it should increase performance greatly and will solve the installation problems (it can be packaged directly with the other MRL dependencies).

Once all this is ready, I'll try adding this as a native service to MRL, and in the future anyone who wants to use Yolo can simply start the service and use normal MRL conventions (it might even be possible to build an OpenCV filter for it, since DarkNet uses OpenCV for webcam access).

Loving the idea of Yolo in MRL!

P.S. My email is bwtbutler@hotmail.com for future reference. I'll add that to my profile quickly.