kwatters's blog

Localization in MyRobotLab and InMoov - LanguagePacks


The current localization in MyRobotLab for the InMoov is based on "language packs".  To make sure that users from all around the world, speaking many languages, can understand how to use MyRobotLab, we support different translations for certain info, error, and status messages in MyRobotLab.

For the InMoov this is managed through sets of files keyed off the "locale", or language.  These files are dictionaries whose entries are looked up by key in the Python code.
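As a rough sketch, a language pack can be modeled as a per-locale dictionary of message keys (the keys, messages, and fallback behavior below are illustrative assumptions, not the actual InMoov language-pack format):

```python
# Hypothetical sketch of language packs keyed by locale.
# Keys and messages are made up for illustration.
language_packs = {
    "en": {"startup": "InMoov is starting", "error.servo": "Servo not attached"},
    "fr": {"startup": "InMoov demarre", "error.servo": "Servo non attache"},
}

def get_message(locale, key):
    # Look up the message for the requested locale,
    # falling back to English if the locale or key is missing.
    pack = language_packs.get(locale, language_packs["en"])
    return pack.get(key, language_packs["en"].get(key, key))

print(get_message("fr", "startup"))  # InMoov demarre
print(get_message("de", "startup"))  # InMoov is starting (fallback)
```

With this shape, adding a new language is just adding a new dictionary for that locale.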


Emotion detection in faces


Gael recently pointed us at an open source project that has trained a few different neural networks in Keras to detect emotions on detected faces.  More info here:

https://github.com/omar178/Emotion-recognition


Getting started with the nVidia Jetson Nano and MyRobotlab


nVidia recently announced a whole line of system-on-a-chip boards (similar to the RasPi) that showcase the nVidia GPU.  These boards are based on a 64-bit ARM CPU architecture coupled with an nVidia GPU.  (The Nano has a 128-core Maxwell GPU.)  It's got 4 GB of RAM, and uses a standard microSD card for its main system storage (by default).


Lloyd, an evolved InMoov for Telepresence and Augmented Reality


It's been a goal of mine (ours?) for a long time to have a proper telepresent InMoov.  I believe Lloyd is that.  There is a new branch of MyRobotLab called "lloyd", based off the develop branch.  This is where I'll be continuing work on my customizations without horribly breaking things that are already working.


New Tracking Algorithms, TLD, MedianFlow, and More!


So, in OpenCV 3.x there is a new opencv_tracking module that is exposed via JavaCV.

This module contains 7 different implementations of object trackers.  There is a new "Tracker" filter that allows you to switch between any of these 7 tracking algorithms.  It should work similarly to how the LK tracking works, in that it publishes the point that's being tracked.  Right now the filter only supports tracking a single point.
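As a toy illustration of how one of these algorithms works, here is a minimal sketch of the core idea behind MedianFlow: the tracked point is moved by the median displacement of a set of sub-points, which makes it robust to a minority of badly tracked sub-points.  This is only a sketch of the idea, not the OpenCV implementation:

```python
from statistics import median

def median_flow_step(prev_points, curr_points, tracked_point):
    # Displacement of each sub-point between the previous and current frame.
    dxs = [c[0] - p[0] for p, c in zip(prev_points, curr_points)]
    dys = [c[1] - p[1] for p, c in zip(prev_points, curr_points)]
    # MedianFlow's core idea: update the tracked point by the MEDIAN
    # displacement, so a few bad sub-points don't drag it off target.
    return (tracked_point[0] + median(dxs), tracked_point[1] + median(dys))

prev_pts = [(10, 10), (20, 10), (30, 10)]
curr_pts = [(12, 11), (22, 11), (90, 50)]  # last sub-point is an outlier
print(median_flow_step(prev_pts, curr_pts, (20, 10)))  # (22, 11)
```

The outlier sub-point jumps by (60, 40) but the median displacement is still (2, 1), so the tracked point moves smoothly.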


Robot Memories Solr and InMoov


So, I've been playing around a bit with using Solr as the basis for recording data that flows through MyRobotLab and exposing it for search later on.  Here's a screenshot of the new (very sparse) Solr search GUI in the Swing GUI.
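As a rough sketch of the idea, a message flowing through MyRobotLab could be flattened into a Solr document before indexing.  The field names and the update endpoint below are assumptions for illustration; this only builds the update payload, it does not post it anywhere:

```python
import json
import time

def to_solr_doc(sender, method, data):
    # Flatten an MRL-style message into a flat Solr document.
    # Field suffixes (_s for string, _t for text) follow common
    # Solr dynamic-field conventions; the exact schema is an assumption.
    return {
        "id": "%s-%d" % (sender, int(time.time() * 1000)),
        "sender_s": sender,
        "method_s": method,
        "data_t": str(data),
    }

doc = to_solr_doc("opencv", "publishOpenCVData", {"faces": 1})
# An update request would POST this JSON to the core's /update handler.
payload = json.dumps([doc])
print(doc["sender_s"], doc["method_s"])  # opencv publishOpenCVData
```

Once documents like this are indexed, the search GUI just issues queries against those fields.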


Custom grabbers and OpenCV capture


There's been some refactoring of OpenCV to expose a method that allows you to pass in a custom frame grabber directly when you tell OpenCV to start.  This means we can programmatically control the settings on the grabber.

 

Here's a small Python example of creating a frame grabber, initializing it with the filename and the API preference, and then starting the capture with it.
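The original example didn't survive in this copy, so here is a shape-only sketch of the pattern described above, using stand-in classes.  The class names, `set_option`, and `capture` signatures are assumptions for illustration, not the actual JavaCV or MyRobotLab API (the real code would construct a JavaCV grabber and hand it to the OpenCV service):

```python
class FrameGrabber:
    """Stand-in for a custom frame grabber (e.g. a JavaCV grabber)."""
    def __init__(self, filename):
        self.filename = filename
        self.options = {}

    def set_option(self, key, value):
        # Programmatic control over grabber settings, as described above.
        self.options[key] = value

class OpenCVService:
    """Stand-in for the OpenCV service that accepts a custom grabber."""
    def capture(self, grabber):
        # Start capturing using the supplied custom grabber,
        # rather than a grabber chosen internally by the service.
        self.grabber = grabber
        return "capturing from %s" % grabber.filename

grabber = FrameGrabber("test.mp4")
grabber.set_option("apiPreference", "FFMPEG")  # hypothetical setting
opencv = OpenCVService()
print(opencv.capture(grabber))  # capturing from test.mp4
```

The point is the shape of the call: configure the grabber yourself, then pass it in when capture starts, instead of letting the service pick its own grabber.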

 


Updated joystick example


It's come to my attention that the joystick example scripts are pretty far out of date and definitely don't work very well with Manticore.

I have updated an example joystick script that will work with Manticore:

https://github.com/MyRobotLab/pyrobotlab/blob/develop/home/kwatters/joys...

This is a very simple example of the joystick's analog stick x axis controlling the speed and direction of a servo motor.  This is a much simpler approach than having to deal with the "sweep" function on the servo.
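The mapping itself can be sketched in a few lines, assuming the axis publishes values in [-1.0, 1.0].  The function name, dead zone, and direction labels here are illustrative assumptions, not the exact MRL joystick/servo API:

```python
def axis_to_move(axis_value, max_speed=1.0, dead_zone=0.1):
    # Map the analog stick's x axis (-1.0 .. 1.0) to a servo move:
    # the sign gives the direction, the magnitude gives the speed.
    # A small dead zone keeps a centered stick from jittering the servo.
    if abs(axis_value) < dead_zone:
        return ("stop", 0.0)
    direction = "left" if axis_value < 0 else "right"
    speed = min(abs(axis_value), 1.0) * max_speed
    return (direction, speed)

print(axis_to_move(0.5))   # ('right', 0.5)
print(axis_to_move(-0.9))  # ('left', 0.9)
print(axis_to_move(0.05))  # ('stop', 0.0)
```

In the actual script, a function like this would sit inside the joystick input callback and drive the servo's speed and direction directly, with no need for `sweep`.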