I'm using the Raspberry PI 3 as the "brain" in my InMoov. The main reason is that it's small and cheap.

I have successfully used the PI camera to stream video, and that works pretty well. 

https://www.youtube.com/watch?v=gWTtCBu6nUc

Depending on how you configure the camera, you can capture up to 90 fps (at a reduced resolution such as 640x480). So the camera is good.
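For reference, you can try a high frame rate capture directly from the command line with raspivid (the resolution and filename below are just examples; 90 fps needs a small frame size):

```shell
# Record 10 seconds of 640x480 video at 90 fps from the PI camera
raspivid -w 640 -h 480 -fps 90 -t 10000 -o test90fps.h264
```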

You can also install UV4L to be able to stream from the PI: https://www.linux-projects.org/uv4l/installation/

However, most programs for using a camera are built to support a web camera and not the PI camera. 

There is a very easy workaround for that. The latest Raspbian releases (Jessie and Stretch) both have the v4l2 driver installed, but not enabled. The v4l2 driver makes the PI camera show up as /dev/video0, just like a webcam. All you have to do is edit the /etc/modules file, add the line bcm2835-v4l2, and reboot.
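A sketch of those steps from a terminal (this appends to /etc/modules, so only run it once):

```shell
# Load the v4l2 driver immediately (lost on reboot)
sudo modprobe bcm2835-v4l2

# Make it load at every boot by adding it to /etc/modules
echo "bcm2835-v4l2" | sudo tee -a /etc/modules
sudo reboot
```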

So we now have a good camera, but my experience is that OpenCV in MRL is a bit slow on the PI.

My next step to get the PI performance up to shape is to use a brand new device called the Movidius Neural Compute Stick. It's a USB stick, similar to a USB storage device, but instead of containing a lot of storage, it contains a lot of computing power, primarily intended to run deep learning networks.

You can read about the compute stick here: https://developer.movidius.com/

I don't have this device yet, but I ordered one in August, and I should receive it some time soon. 

The first step is to install Ubuntu 16.04. You can find it here: https://www.ubuntu.com/download/desktop

The latest release of the software from Movidius can also run in a virtual instance of Ubuntu, but that was not possible when I started, so I made a dual boot system.

I also downloaded and installed the NCSDK as described here: https://developer.movidius.com/start
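The install boils down to roughly these commands, as I remember them from the getting-started page (the repository URL may change over time):

```shell
# Fetch and install the Movidius Neural Compute SDK
git clone https://github.com/movidius/ncsdk.git
cd ncsdk
make install

# Build the bundled examples (this step needs the stick plugged in)
make examples
```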

The installation was easy, but some steps failed because the software was unable to find the compute stick. Well, that will probably work better when I have received one :)

I made the same installation on the Raspberry PI. That was a little more challenging. The installation script stopped because of some strange files that were installed with XRDP, so I had to edit out some tests that just checked whether I already had the software installed.

The installation on the PI downloads and compiles OpenCV 3.3 and creates bindings for Python 3.

The installation takes a lot of time, and the compilation of the examples also failed because of the missing USB compute stick.

So when I receive the compute stick, I will redo the install steps. The examples show how to use a USB camera, so one of the tests I will do is to use the PI camera. I will also test to see if I can get better performance from OpenCV using some compiler options as described in this post: https://www.pyimagesearch.com/2017/10/09/optimizing-opencv-on-the-raspb…
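In short, that post enables the ARM NEON and VFPV3 hardware optimizations when compiling OpenCV. A sketch of the relevant cmake configuration (the paths and version number are just examples for my setup):

```shell
# Configure OpenCV with NEON/VFPV3 optimizations enabled
cd ~/opencv-3.3.0/build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D ENABLE_NEON=ON \
      -D ENABLE_VFPV3=ON \
      -D BUILD_EXAMPLES=OFF ..
make -j4 && sudo make install
```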

The compute stick only supports Caffe deep learning networks on the PI, so I will try to learn how to build that type of network. As somebody pointed out some time ago, YOLO running on Darknet is a fast object detection algorithm, so it would be nice to be able to use it: https://www.ted.com/talks/joseph_redmon_how_a_computer_learns_to_recogn…

So I was really happy to see this:

https://ncsforum.movidius.com/categories/general-discussions

https://github.com/gudovskiy/yoloNCS

I will continue to write on this post as soon as I make some progress.


That is true. However, not on the Raspberry PI, since TensorFlow is not officially supported on the PI. At least that's what it says when I execute the install scripts on the PI.

Perhaps that will change in the future. So for now I will test with Caffe.