Hello, it's James from XRobots here (https://www.youtube.com/user/jamesbruton)!

I've been aware of MyRobotLab for a while but never tried it until now - I didn't realise how versatile it is! I'd like to use it for some projects on my YouTube channel that need speech and vision recognition plus Arduino integration - so it's ideal, right?!

On Windows I have no problem making everything work: I can run the Jar file, access my webcam with the OpenCV capture, and use facial recognition and all the filters. I've dabbled with the Arduino MRL code and all of that works.

However, I really want to make some mobile robot applications using a Raspberry Pi. Again, I have most of it working with an Arduino, and it all looks good running on a Rasp Pi 3B+ with Raspbian Stretch with Desktop.

However I'm having some issues with the cameras. I have both the official Pi cam and another USB webcam. I have done the following:

-Installed the Pi cam and verified it works with the command 'raspistill'

-Installed UV4L and made the Pi cam into a webcam I can browse to from anywhere on the network. This also means I can access the MJPEG stream in Myrobotlab running on the Pi using this method: http://myrobotlab.org/content/pi-camera-streaming-uv4l-pc-mrl  

-I can run the filters like facial recognition using the MJPEG stream, but it drops to about 2fps using this method and it's really laggy. But this means that OpenCV is working, right? Because otherwise it wouldn't be able to do the image processing.

-However, I can't get 'openCV capture' to work the way it does on Windows. Even though UV4L makes the Pi cam appear as /dev/video0 (and the second USB camera as /dev/video1), I cannot get any of the other input methods to see the camera, either as a file at those locations or just as 'camera 0' etc.

-So I spent a day following this guide: http://myrobotlab.org/content/raspberry-pi-pi-camera-opencv-310-mrl - which says I need to compile OpenCV separately. That takes a few hours and fails with the same errors that are mentioned. The last one I can't fix is this: https://raspberrypi.stackexchange.com/questions/42283/cant-compile-open… - The fix doesn't seem to actually work now, but who knows what else has changed in the last year...
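As a side note, a quick way to check which video device nodes exist and which driver owns them ("v4l2-ctl" from the v4l-utils package is an assumption about installed tooling):

```shell
# List any V4L2 device nodes; /dev/video* are the standard Linux names
ls /dev/video* 2>/dev/null || echo "no video devices found"

# If v4l2-ctl (from the v4l-utils package) is installed, show which
# driver owns each node - useful for telling the Pi cam from a USB cam
command -v v4l2-ctl >/dev/null && v4l2-ctl --list-devices || true
```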


However, I don't really understand why I need to compile and install OpenCV separately to make the capture work, since it's the same Jar file and source that I used on Windows, and that just worked. Why would I need to install OpenCV and add Python 'import' support for OpenCV when MyRobotLab is written in Java anyway? Also, I know the actual OpenCV processing algorithms work fine on the Pi, since they work with the Pi cam via the MJPEG stream!

All I really want is direct camera access on the Pi instead of working around it with the MJPEG stream. From other posts it looks like 'openCV capture' is quicker and will do processing at around 9-10 fps.

Can someone tell me how this actually hangs together please?







Hello James,

First off, I like your videos and have been a long-time subscriber. :-)

Getting the camera on the Raspberry Pi 3 with the Manticore version of MyRobotLab (MRL) was something of a challenge.

This however was solved for the Nixie version of MRL.

You can get the current version of MRL by clicking on the "latest" link just above the message box in the shoutbox.

Just be advised that Nixie is currently a work in progress, so features may change as they are improved.

I have found the native RasPi camera support to be fully working on the Stretch version of Raspbian, starting with the Alpha releases of Nixie - out of the box, so to speak.

Give Nixie a try, and if you are still having trouble, then by all means drop back in here for a bit of help.

And have fun making :-)


This is beginning to smell a lot like the problem I am currently having on my Jetson TX2, but I couldn't find a UV4L to play around with for it :/

I just can't understand why we can't directly access /dev/video0, /dev/video1, etc...

OK, this is a bit beyond my abilities at this time. However, based on what I have learned from the shoutbox over the last year or so:

The .so files required for frame grabber support on the Raspberry Pi 3 (RasPi3) had, at the time, not been compiled and added.

This was more of a time constraint on the developers, who did eventually compile the frame grabber binaries for the Java system.

These were then included with the updated maven build of MRL.

Now that MRL is using Maven (as of Nixie), updates from the original creators of projects like OpenCV's Java bindings will make it in more frequently.

With the demand for the RasPi frame grabber binaries fed back to the source of the OpenCV project, the binaries were finally created, and these were then added to MRL with the Maven upgrade.

While there has been great development with OpenCV on the RasPi3, we still don't have YOLO running on it, but given a bit more time I have no doubt we will get it.

As for the TX2, that is a very new product and, as you are finding, there isn't yet much support for it.
I know that you have made great progress towards getting OpenCV working on the TX2 under MRL, and I really do hope you succeed. Success there will mean you have the understanding to possibly compile the binaries for other systems as they come up. You are, after all, considered by some of us as one of the gurus.. :-)

Hmm, strange, because it's working for me .. W10 x64 and 64-bit Java. When switching between two versions, you need to delete every MRL file in the MRL folder.


Nixie is a work in progress, so maybe the latest has a bug, but give me an email address and I will send you the current one I am using on my TX2; it runs without any errors and the GUI is just fine :)

So I got the newest version from Spawn32, but now the following happens:

On Windows I get the following errors and then nothing happens until I close the command-line window. It then launches the GUI; however, when I try to install any components the 'downloading' window remains blank and it never finishes:


On Rasp Pi I get the same error and it never launches the GUI.

Both are in totally new folders.

I can still launch the Manticore version fine on both platforms, btw.

Hello James-who-makes-Robots! Welcome to MyRobotLab. I've been a supporter of yours on Patreon for quite some time now. Your work has been very inspirational for people around the globe.

Ok, now to the good stuff. Where to begin.. Ok, so the official release of MyRobotLab is still the "Manticore" release, and that is more stable than the current Nixie release. However, with Manticore there was an issue with the frame grabber for OpenCV on the Raspberry Pi 3. The workaround I used there was to set up an MJpegStreamer and have OpenCV attach to that as a video-over-IP source, using the MJpeg frame grabber and specifying the URL for the video feed.

The good news is that in the Nixie release we updated to a more recent version of JavaCV (the OpenCV binding for Java, version 1.4.1), and that resolved the frame grabber issue on the RasPi 3. It sounds like you're on the bleeding edge and using Nixie now, so great, let's continue there.

So, when installing MyRobotLab we recommend something like the following steps.

1. Create a new directory, ~/nixie or other. (For this example, I'll use "nixie" as the directory that you install MyRobotLab into.)

2. Download the myrobotlab.jar file and place it in the "nixie" directory.

3. Very important.. if you're running a 64 bit operating system make sure you have the 64 bit Java runtime environment (or JDK).  If you have a 32 bit operating system make sure you are using the 32 bit version of the Java runtime.  If you are using the wrong "bitness" the wrong native libraries will be downloaded and things will throw odd errors.

4. Assuming the right Java is installed: cd ~/nixie

5. Run the following command: "java -jar myrobotlab.jar -install". This command will download and install all of the dependencies for all of the services in MyRobotLab. We use "ivy" to download these dependencies, and you might notice that the ~/.ivy2 directory holds a cache of all the various jar files that we pull in (nearly 700MB worth at this point). Subsequent installs will use this cache directory and should go quicker as you move from release to release.

6. Once MyRobotLab has finished installing, it should exit and return to the command line. To start MyRobotLab you can/should run the following command: "java -jar myrobotlab.jar". This should bring up the GUI. You can point and click to start the OpenCV service and click "capture". Assuming you've got a standard webcam, it should just "work".
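The steps above can be sketched as a short script (myrobotlab.jar must already have been downloaded into the directory, as in step 2; the directory name is just the example used above):

```shell
# Sketch of the install-and-run steps above
MRL_DIR="${MRL_DIR:-$HOME/nixie}"
mkdir -p "$MRL_DIR" && cd "$MRL_DIR"   # step 1

if [ -f myrobotlab.jar ]; then         # step 2 done by hand beforehand
  java -jar myrobotlab.jar -install    # step 5: one-time dependency download via ivy
  java -jar myrobotlab.jar             # step 6: normal start, brings up the GUI
else
  echo "myrobotlab.jar not found in $MRL_DIR - download it first"
fi
```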


Ok, now, when MyRobotLab runs and does its install, a directory called nixie/.myrobotlab will be created. In this directory is serviceData.json (this file tells MyRobotLab which services are installed). Additionally, services might save their state in a file named after the service with a .json extension in that same directory - something like "opencv.json".

The ERROR message you posted looked like a service-state json file was corrupted. Perhaps, when you changed the version of MyRobotLab, you installed into the same directory and the .myrobotlab directory already existed there, so there was a problem deserializing the json.

So, to fix the error you posted: go into the .myrobotlab directory and delete all files except the "serviceData.json" file. Or just nuke the entire directory and start over with "java -jar myrobotlab.jar -install". This will fix the error in your screen capture.
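A sketch of that cleanup as commands (run from the directory you installed myrobotlab.jar into; the layout is as described above):

```shell
# Remove cached service-state files but keep serviceData.json,
# which records which services are installed
if [ -d .myrobotlab ]; then
  find .myrobotlab -maxdepth 1 -name '*.json' ! -name 'serviceData.json' -delete
else
  echo "no .myrobotlab directory here"
fi
```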


Now, to some of your other issues: yes, Face Recognition on the RasPi3 is a bit CPU intensive, and yes, the frame rate does drop pretty low. I have started updating some of the filters to use a separate classification thread so it doesn't slow down the frame rate, but we haven't yet done this for the Face Recognition filter. (This is on our radar to do, and if you're interested in using it, then I'll bump it to the top of the TODO list.)


We're all very excited to have you around, and I'm sure, as our free time permits, we will do everything possible to make sure you're successful.

Keep up the great work!


P.S. The "myrobotlab.log" file often includes some additional debugging information. There is a link in the "help" menu of the GUI that says "Help! It no worky!" - that button will send the log file to a server where we can look at it; we might recognize some of the errors pretty quickly.

Ah, after I wrote the previous post, I realized I left out some pretty important RasPi3 details. I forgot about them because it's been quite a while since I set up my RasPi. But I remember that when first getting MRL to work on the RasPi3, I needed to make sure some system libraries were in place. The one in particular that seemed to be the key for me was

apt-get install libunicap2 

In addition to that, there might be some other libraries/system packages to install to get all of the dependencies for OpenCV to work properly. Here's a link to a very old post, but some of it is still relevant:




So, I have it (almost) working.

I started with a brand new Raspbian Stretch SD image.

I used the Nixie version of Myrobotlab and it runs ok, and it updated ok.

As per your links, I ran:

sudo apt-get -y install openjdk-7-jdk libunicap2 libjpeg8-dev libv4l-dev libjpeg62-dev libv4l-dev libavcodec-dev libavcodec libjpeg62-dev libv4l-dev libdc1394-22-dev libavformat53 libswscale2 

sudo apt-get install libunicap2

Then I plugged in a USB webcam, and checked it was there as /dev/video0

Then straight away OpenCV capture works ok in MyRobotLab; I get about 15fps from the USB webcam, and 9-10 fps if I run face detection.

Then I tried to make it work with the Pi Cam. So I installed all the stuff for UV4l, as per: https://www.linux-projects.org/uv4l/installation/

Now I have both /dev/video0 and /dev/video1, and I can stream the Pi Cam using the web interface on http://{rasp pi IP address}:8080 etc.
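A quick sketch for checking that the UV4L HTTP server is answering (the hostname here is a placeholder; 8080 is the port mentioned above):

```shell
# Probe the UV4L web interface; prints the HTTP status code on success.
# PI_HOST is a placeholder - substitute your Pi's address.
PI_HOST="${PI_HOST:-raspberrypi.local}"
curl -s --max-time 2 -o /dev/null -w "%{http_code}\n" "http://$PI_HOST:8080/" \
  || echo "stream server not reachable"
```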

However, now neither camera works using OpenCV capture in MyRobotLab. The Java console says it's beginning frame capture etc., but there's no image - just the logo that's there before you start capture.


So it seems pretty close, and the OpenCV capture is much more efficient than the MJPEG stream processing. Even though there is some lag in the OpenCV capture, it's not cumulative like the MJPEG capture, where the delay seems to get bigger and bigger.

This isn't really an issue, since USB webcams are probably cheaper than Pi Cams and have longer, more flexible cables. But if anyone knows how to get the actual Pi Cam to work, that would be handy (since I have one, and the 450mm cable ;-)

Hi James,

This seems to be very similar to what I am looking into right now; I have the exact same thing on the Jetson TX2 using the "OpenCv capture". Even if it seems to be running, you will see, if you open another terminal window and do a dmesg -wH, that it just powers the cam, starts the stream, stops the stream and then powers off (if there are any debug msgs in the RasPi cam driver).

So right now I am trying to figure out if the "OpenCv capture / grabber" is internal in the OpenCV jar source files or its own module; we need to fix this :)

Hi James,

I finally had a chance to test out the RasPi camera with the standard OpenCV frame grabber on the RasPi 3.

Good news is, it just "worked" for me out of the box. Here are the steps I took:

1. Connect the Pi cam module with its ribbon cable to the RasPi.

2. Enable the camera with the standard "raspi-config" by selecting "Interfacing Options" and subsequently "P1 Camera", and selecting that the interface should be enabled.

3. Enable the V4L driver so the camera shows up as a typical /dev/video0 device by loading the following module into the kernel: "sudo modprobe bcm2835-v4l2"

4. Start MyRobotLab as normal (java -jar myrobotlab.jar).

5. Launch an OpenCV service from the runtime tab.

6. Select camera index 0 (the default) and the OpenCV frame grabber from the dropdown (also the default), and click "capture".


Magic..  it was all worky ..
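For reference, steps 2-4 can be sketched as commands. The non-interactive "raspi-config nonint do_camera 0" form is an assumption; the interactive menu described above does the same thing:

```shell
# Sketch of the raspi-cam setup steps, guarded so it only runs on a Pi
if [ -e /boot/config.txt ]; then
  sudo raspi-config nonint do_camera 0           # step 2: enable the camera interface
  sudo modprobe bcm2835-v4l2                     # step 3: expose the cam as /dev/video0
  echo bcm2835-v4l2 | sudo tee -a /etc/modules   # optional: load the module on every boot
  java -jar myrobotlab.jar                       # step 4: start MRL as normal
else
  echo "not a Raspberry Pi, skipping"
fi
```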

I suspect that the uv4l install is probably not what you want here, especially if it's setting up its own video stream; I suspect that process is holding onto the /dev/video0 device, so other applications can't use it. (Just a thought.)
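One quick way to test that theory is to ask which process currently holds the device ("fuser" from the psmisc package is an assumption about installed tooling):

```shell
# Show which process, if any, has the camera device open
if [ -e /dev/video0 ]; then
  sudo fuser -v /dev/video0 || echo "no process holds /dev/video0"
else
  echo "/dev/video0 not present"
fi
```

If the uv4l streamer shows up in the output, stopping it should free the device for the OpenCV frame grabber.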


Anyway, good luck! I didn't get a chance to check the frame rate; it seemed reasonably fast, though there was still almost a 1 second delay in the video, and that increases if I'm remotely connected via VNC Viewer to the RasPi.


It's still on the list to do the face recognition on a separate thread from the frame grabber; that will help with the main frame rate issue (to some degree).