I am in the process of creating a path finder Python program using the Kinect and my InMoov cart.

I am currently able to capture the depth image from the Kinect and create a top view of the obstacles. I also enlarge the obstacles to provide room for my cart before searching a path to my target. The target is currently a fixed location, but in the future it should be an object, maybe identified by the "brain" kwatters is trying to add, or maybe more simply an optical marker OpenCV can recognize. I run an A* search and have added my own code to "straighten out" the path. This way I end up with an initial direction the cart should move in. A BNO055 on the cart allows me to rotate it to point in that direction. I then intend to send a move command over the serial connection to the cart's Arduino. I am still a bit frightened of having the cart move on its own (with the precious load it carries), so maybe I had better start thinking about a remote "emergency stop" button too ....
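
To give an idea of what I have in mind for the move command, here is a minimal sketch - the port name, baud rate and the "MOVE"/"STOP" command format are pure assumptions, the real protocol of my cart's Arduino will differ:

# minimal sketch of sending a move command to the cart's Arduino over serial
# (port, baud rate and command format are made up)
import serial

cart = serial.Serial("COM5", 115200, timeout=1)

def move(direction_deg, distance_cm):
    cmd = "MOVE,{:d},{:d}\n".format(int(direction_deg), int(distance_cm))
    cart.write(cmd.encode("ascii"))
    return cart.readline().decode("ascii").strip()   # e.g. an "OK" acknowledgement

def emergency_stop():
    cart.write(b"STOP\n")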

Now - to kick off the path finder program I would like to use the standard MRL features with WebkitSpeechRecognition and ProgramAB to, e.g., say "go to the door" (I still kind of would like Marvin to open a door ...).

I assume I could somehow run my separate exe from the Python stub called by ProgramAB. But could I also set up an "MRL listener" in my Python program? And maybe also a publisher, so I can see in MRL what my task is doing?

Or should I rethink my plans and run my Python code completely within MRL? I am currently using pygame to visualize my solutions.

I run MRL and my exe on a Windows 10/64 laptop.

GroG


Great project, juerg. It's interesting that the post just before yours (http://myrobotlab.org/content/services-written-native-python) is about just what you are suggesting: subscribing to topics in another language and process.

Of course it's all about time .. so many fun things to do, so little time...

To save time :

  • If your solution is 'pure' Python it should be able to run in MRL - but I suspect getting the image probably relies on a native C lib to grab the frame from the camera
  • As soon as I finish the Osc Service page, I'll be switching to focus on the WebGui & WebGuiClient work .. the purpose of this is to allow connectivity, registering of services, and the full pub/sub messaging to work in other languages (Python, JavaScript, etc) .. but this will take "time"
  • Another possibility is to upgrade JavaCV to 1.32 and create a filter which would aid your mapping there

Hi GroG

Always keeping an eye on MRL subjects, as it seems!

I read the article you mentioned but also read a bit of the RPyC documentation, and to me it looks like something that would be easy to try out and could be sufficient for the moment for what I would like to achieve.

So I hope to get some help with the Aruco library to provide a "real" target.

I know you like pictures, so here are some pieces of the puzzle I am trying to solve:

First, my Kinect depth picture:

From the depth info I filter out the floor and similar clutter and get an obstacle top view:

It's mostly the back of my chair and some elements on the side.

I then enlarge the obstacles to account for the size of my cart. Middle top (red, hard to see) is my start point and the blue marker is the target.
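
One way to do this kind of enlargement is a binary dilation. Just a rough sketch of the idea (not my exact code) - it assumes the top view is a numpy array with non-zero cells marking obstacles, and the cart radius of 25 cells is a made-up value:

import numpy as np
from scipy import ndimage

def inflate_obstacles(obstacle_map, cart_radius_cells=25):
    # build a circular structuring element roughly the size of the cart
    y, x = np.ogrid[-cart_radius_cells:cart_radius_cells + 1,
                    -cart_radius_cells:cart_radius_cells + 1]
    disk = x * x + y * y <= cart_radius_cells * cart_radius_cells
    # every obstacle cell "grows" by the cart radius
    return ndimage.binary_dilation(obstacle_map > 0, structure=disk)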

So far this has been in a 640x480 matrix, but for the A* algo I reduce the size to a quarter to find the path in reasonable time.
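
A sketch of how that reduction could look - the factor of 4 per axis is just one possible choice, and it assumes the inflated obstacle grid is a boolean numpy array:

def downscale(inflated, factor=4):
    # crop to a multiple of the factor, then group into factor x factor blocks
    h, w = inflated.shape
    blocks = inflated[:h - h % factor, :w - w % factor]
    blocks = blocks.reshape(h // factor, factor, w // factor, factor)
    # a coarse cell is blocked if any fine cell inside it is blocked
    return blocks.max(axis=(1, 3))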

As the path is a bit ugly, I apply a Bresenham algo to find a shorter, more direct path, which is finally shown here.
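
The straightening idea in a nutshell: walk along the A* path and always skip ahead to the farthest waypoint that is still reachable in a straight, obstacle-free line, where the line is rasterized with Bresenham and checked against the obstacle grid. This is only a sketch of the concept, not my exact code (the grid is indexed as grid[y][x], True meaning blocked):

def bresenham(x0, y0, x1, y1):
    # integer line rasterization between two grid cells
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if x0 == x1 and y0 == y1:
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def line_is_free(grid, a, b):
    return all(not grid[y][x] for x, y in bresenham(a[0], a[1], b[0], b[1]))

def straighten(grid, path):
    shortcut, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_is_free(grid, path[i], path[j]):
            j -= 1
        shortcut.append(path[j])
        i = j
    return shortcut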

This way I get an initial direction for the move to the target, and with the help of the BNO055 mounted on the cart I can rotate to this direction and start a forward move.
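
Roughly, the initial direction comes from the first segment of the straightened path, and the rotation is a simple loop against the BNO055 heading. Again only a sketch of the concept - getCartOrientation() and rotate() are placeholders for the real cart commands:

import math, time

def initial_heading(path):
    # 0 degrees = "up" in the map, increasing clockwise
    (x0, y0), (x1, y1) = path[0], path[1]
    return math.degrees(math.atan2(x1 - x0, y0 - y1)) % 360

def rotate_to(target_deg, tolerance=3):
    while True:
        # signed difference in the range (-180, 180]
        diff = (target_deg - getCartOrientation() + 180) % 360 - 180
        if abs(diff) <= tolerance:
            break
        rotate("right" if diff > 0 else "left")
        time.sleep(0.1)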

As of now it's not "in action" because it is still missing the repeated path finding and does not cover the "blind" first 80 cm of the Kinect, nor any obstacle the Kinect and my cart sensors have not detected. I am also a bit afraid of having the cart drive around by itself with my Marvin on top!!!

I have to think about a remote emergency stop button, as running behind the cart to switch the power off if something goes wrong is nothing my old bones would like.

Thanks for your constant presence and help, and I count on you not to get too distracted from the path to Manticore!

 

A new issue showed up.

I am able to run rpyc in my standalone "navigation" process and can communicate with it from an interactive Python session using:

import rpyc
conn = rpyc.connect("localhost", 18812)
conn.root.navigateTo((300,350))

At first I had a thread-locking issue with the pygame module I am using for visualization, but I was able to circumvent it with repeated calls to pygame.mainloop(0.1).
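
An alternative I might try at some point (only a sketch, using the MsgReaderService class I describe further down): run the ThreadedServer in a background thread and keep the pygame event loop on the main thread:

import threading
import pygame
from rpyc.utils.server import ThreadedServer

# serve rpyc requests from a daemon thread
server = ThreadedServer(MsgReaderService, port=18812)
t = threading.Thread(target=server.start)
t.daemon = True
t.start()

# keep the visualization loop on the main thread
pygame.init()
screen = pygame.display.set_mode((640, 480))
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    pygame.display.flip()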

So I thought I could start up MRL, go to the Python tab and do the same programmatically. However - I get an error when trying to import rpyc in the MRL Python service.

I am not that experienced in understanding how Python accesses modules, and trying to read about it just adds to my confusion - but is it because my navigation process uses CPython while MRL, I think, uses Jython?
 

There are 3 requirements to get modules to work:

  • They must work with Python/Jython 2.7.0 - because that's the current version we are running
  • They must not contain "native" code (I think)
  • They must be installed into the {mrl folder}/pythonModules

After installing into the pythonModules, you'll need to restart - if all goes well, you "should" be able to use the new Python library...

 

I have it worky now and this is what I did (w10/64):

downloaded the zip from https://pypi.python.org/pypi/rpyc/3.2.3, unzipped it and copied the rpyc folder into my MRL pythonModules folder.

opened my firewall for the port 18812

then to communicate between exe's on my developer PC:

Create a new Python project and ask pip to install rpyc. Add code to talk to my cart's Arduino over the USB connection. One command I implemented is "getCartOrientation", which returns the bno055.orientation.x value to my Python program.

The main thread of this program runs this command

from rpyc.utils.server import ThreadedServer
ThreadedServer(MsgReaderService, port=18812).start()

and I have a class MsgReaderService(rpyc.Service) with the function

def exposed_getCartOrientation(self):
    # ask the Arduino (over USB) for the current bno055 heading
    orientation = arduino.getCartOrientation()
    return orientation
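
Put together, a minimal self-contained version of the server side could look like this - arduino and navigateTo are placeholders for my own serial and path finding code:

import rpyc
from rpyc.utils.server import ThreadedServer

class MsgReaderService(rpyc.Service):

    def exposed_getCartOrientation(self):
        # placeholder: query the bno055 heading from the Arduino over USB
        return arduino.getCartOrientation()

    def exposed_navigateTo(self, target):
        # placeholder: kick off the path finding for the (x, y) target
        return navigateTo(target)

if __name__ == "__main__":
    ThreadedServer(MsgReaderService, port=18812).start()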

On the MRL-side I did this:

open the python tab in mrl

add this code:
import rpyc
c = rpyc.connect("localhost", 18812)
# and for my usage:
orientation = c.root.getCartOrientation()

and TADAAA I get it into MRL.

My MRL code can also request "navigateTo(x, y)", which starts the path finding algos of my Python exe.

So currently the missing part is the (x, y). As I am in MRL I can use speech recognition and have some fixed locations defined, assuming my robot is at its base location.
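
A sketch of how those fixed locations could look - the names and coordinates are made up, and c is the rpyc connection from above:

locations = {
    "door":    (300, 350),
    "kitchen": (120, 400),
    "base":    (320, 40),
}

def goTo(name):
    # called e.g. from the ProgramAB/speech side with the recognized location name
    if name in locations:
        c.root.navigateTo(locations[name])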

A preferred solution would be to use ArUco markers, as they would allow me to get a rough position estimate based on the size and orientation of the marker.
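
A rough sketch of what the marker detection could look like with OpenCV's aruco module (it lives in the contrib package and the exact calls differ a bit between OpenCV versions; markerLength, cameraMatrix and distCoeffs would have to come from the real marker size and a camera calibration):

import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def find_marker(frame, cameraMatrix, distCoeffs, markerLength=0.10):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, rejected = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None
    pose = cv2.aruco.estimatePoseSingleMarkers(corners, markerLength, cameraMatrix, distCoeffs)
    tvec = pose[1][0][0]   # translation of the first marker in camera coordinates
    return ids[0][0], tvec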

Maybe face recognition could enable telling Marvin to approach a certain person.

And with the help of the proposed dl4j integration I might even be able to ask it to approach an arbitrary object?

To address my concerns about stopping the cart's motion if something goes wrong, I plan to have a monitoring task running on my PC or a tablet that sends a heartbeat. Without that heartbeat the cart will not move, or will stop if it is already moving.
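
A sketch of the watchdog idea on the cart side - exposed_heartbeat() would become part of MsgReaderService, and stopCart() is a placeholder for the real stop command to the Arduino:

import time
import threading

last_heartbeat = time.time()

def exposed_heartbeat():
    # called regularly by the monitoring task on the PC or tablet
    global last_heartbeat
    last_heartbeat = time.time()

def watchdog(timeout=1.0):
    # stop the cart if no heartbeat arrived within the timeout
    while True:
        if time.time() - last_heartbeat > timeout:
            stopCart()
        time.sleep(0.2)

t = threading.Thread(target=watchdog)
t.daemon = True
t.start()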

These things take their time, so be patient about seeing a video of the action!