So, one of my New Year's resolutions was to add some deep learning support to MyRobotLab. The reality is, this is a big topic that covers many aspects, but at the end of the day I boil it down to this:
- get some training data
- build a model
- use that model on new data in real time.
The face recognizer OpenCV filter does basically exactly this. It creates a labeled training data set, uses that data set to build a model of the faces it has been shown, and then uses that model to classify new faces as they are detected...
The "building a model" part is the piece in question here. These days there are a bunch of open source frameworks for doing "deep learning", which is really just another way of saying "building a model from training datasets"...
So... MyRobotLab is written in Java, so it makes sense to use a Java-based framework for this model building. The nature of this project is open source, so that means we should be looking for an open source, Java-based framework. Well, amazingly, there's an awesome framework for exactly that called "deeplearning4j"...
Alright, the path forward...
borg it in!
... That means we need to resolve common dependencies between the builds, as there is a lot of overlap. The first major one is JavaCV (OpenCV), which needs to be updated to the version that deeplearning4j uses (1.3). In addition to that, there are a bunch of other jar files that dl4j requires that we already include slightly different versions of... (This is where a Maven-based build for MyRobotLab might make things easier, but that's a topic for another blog post.)
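Side note: when juggling overlapping jar versions like this, it helps to double-check which native OpenCV actually ends up getting loaded at runtime. Something along these lines should do it (just a throwaway sketch, not MRL code, assuming the JavaCV presets expose the usual CV_VERSION_* constants):

```java
import org.bytedeco.javacpp.Loader;
import org.bytedeco.javacpp.opencv_core;

public class OpenCvVersionCheck {
    public static void main(String[] args) {
        // Force the native OpenCV libraries to load, then print the version constants
        // JavaCV was built against, so we know which jars actually won on the classpath.
        Loader.load(opencv_core.class);
        System.out.println("OpenCV " + opencv_core.CV_VERSION_MAJOR + "."
                + opencv_core.CV_VERSION_MINOR + "." + opencv_core.CV_VERSION_REVISION);
    }
}
```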
I'm in the process of updating JavaCV (sorry, I broke the build... that will be fixed soon). Once that is done, we'll be working towards an interface between the OpenCV service and the Deeplearning4j service.
In the short term, I'm using the AnimalClassifier example from the dl4j-examples GitHub project as the basis for how we borg it in. The idea is that we should be able to provide it a bunch of example images, and it should be able to look at those images and learn what's in them.
Then we should be able to pass a new image (from OpenCV) to the dl4j service, and it should return what it recognized in the image (based on what it's been trained on).
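For the curious, here's roughly what that classification call could look like on the dl4j side. This is just a sketch, not the actual service code: the model file name, the label list, and the image size are all made up for illustration, and it assumes a model has already been trained and saved somewhere.

```java
import java.io.File;
import java.util.Arrays;
import java.util.List;

import org.datavec.image.loader.NativeImageLoader;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.api.preprocessor.ImagePreProcessingScaler;
import org.nd4j.linalg.factory.Nd4j;

public class Dl4jClassifySketch {

    // Image geometry the (hypothetical) model was trained with.
    static final int HEIGHT = 100, WIDTH = 100, CHANNELS = 3;

    public static void main(String[] args) throws Exception {
        // Load a previously trained model from disk ("animalModel.zip" is a made-up file name).
        MultiLayerNetwork net = ModelSerializer.restoreMultiLayerNetwork(new File("animalModel.zip"));

        // The label order the model was trained with (also made up for this sketch).
        List<String> labels = Arrays.asList("bear", "deer", "duck", "turtle");

        // Convert an image -- e.g. a frame the OpenCV service wrote to disk -- into an INDArray.
        NativeImageLoader loader = new NativeImageLoader(HEIGHT, WIDTH, CHANNELS);
        INDArray image = loader.asMatrix(new File("newFrame.png"));
        new ImagePreProcessingScaler(0, 1).transform(image);   // scale pixels to [0,1]

        // Ask the network what it thinks the image is.
        INDArray output = net.output(image);
        int best = Nd4j.argMax(output, 1).getInt(0);
        System.out.println("Recognized: " + labels.get(best)
                + " (confidence " + output.getDouble(0, best) + ")");
    }
}
```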
Ok... so I'm mostly rambling at this point, but I wanted to make a blog post to let the community know why I'm updating OpenCV to a newer version... :)
Fun & big stuff coming up!
In my opinion, this is exactly what MyRobotLab lacks between ProgramAB and OpenCV to be able to create a robot with its own character and reactions, instead of a better remote-controlled pile of servos. I would be excited to try it out once my InMoov is finally working (my priority at the moment is: work > 3D printer > InMoov hardware > software/MyRobotLab).
Keep it up!
NativePython
Not sure if I should post this here, but it's related and I accidentally pressed enter, so here goes :)
I've also been thinking about deep learning in MRL. I was considering adding a TensorFlow service, but the implementation is in Python, and Jython is painfully slow compared to CPython. Considering there are many interesting robotics-related projects written in Python, such as PocketSphinx, ROS, and the Google Assistant SDK, I thought it would be interesting to allow MRL to hook into native Python code.

As such, I've been working on a NativePython service and a pure-Python API leveraging MRL's HTTP API. So far, everything seems to be looking good. MRL is capable of calling native Python code, and native Python code can easily hook into MRL using the same syntax as the Jython API, thanks to proxy classes that are generated on the fly (caching these in memory and exporting them to files is on the todo list). The API is on PyPI and installable through pip as mrlpy. I will be pushing the new API version soon, and I will be working to further integrate NativePython so that it doesn't look any different to MRL's messaging system. After that, I will be adding the PocketSphinx service to give a little more freedom in picking STT engines. Excited to see what deep learning can do for MRL!
On a side note, for future reference, how does one delete a comment? :/
Thanks!
why tensorflow, why not deeplearning4j?
MRL is written in Java, so deeplearning4j is native to it. You can always stand up another process running TensorFlow and native Python... but then you've got to maintain two running processes.
Deeplearning4j has the ability to specify, train, and use neural networks just like TensorFlow.
I'm curious, is there a specific algorithm that you are trying to use in TensorFlow? I'd be willing to bet the same algorithm also exists in Deeplearning4j. Deeplearning4j also has support for GPU acceleration and distributed processing via an integration with Spark.
I'd love to hear more about what you want to do with TensorFlow. I have already had some great success implementing a new facial recognizer based on dl4j and training a CNN (convolutional neural network).
I hope to contribute my Deeplearning4j stuff back to MRL over the next few weeks...
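For anyone wondering what "training a CNN" actually looks like in dl4j, here's a rough sketch. To be clear, this is not the facial recognizer code itself: the directory layout, image size, layer sizes, and hyperparameters are placeholders I made up for illustration, and the exact builder methods shift a little between dl4j releases.

```java
import java.io.File;

import org.datavec.api.io.labels.ParentPathLabelGenerator;
import org.datavec.api.split.FileSplit;
import org.datavec.image.recordreader.ImageRecordReader;
import org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator;
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.conf.layers.SubsamplingLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
import org.nd4j.linalg.dataset.api.preprocessor.ImagePreProcessingScaler;
import org.nd4j.linalg.learning.config.Nesterovs;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class TrainFaceCnnSketch {

    static final int HEIGHT = 100, WIDTH = 100, CHANNELS = 3;

    public static void main(String[] args) throws Exception {
        // Training data: one sub-directory per person, e.g. faces/alice/*.png, faces/bob/*.png
        ImageRecordReader reader = new ImageRecordReader(HEIGHT, WIDTH, CHANNELS,
                new ParentPathLabelGenerator());
        reader.initialize(new FileSplit(new File("faces")));       // hypothetical directory
        int numClasses = reader.getLabels().size();

        DataSetIterator trainIter = new RecordReaderDataSetIterator(reader, 16, 1, numClasses);
        trainIter.setPreProcessor(new ImagePreProcessingScaler(0, 1)); // pixels -> [0,1]

        // A small LeNet-style CNN: two conv/pool blocks, a dense layer, and a softmax output.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(42)
                .updater(new Nesterovs(0.01, 0.9))                  // SGD with momentum
                .list()
                .layer(0, new ConvolutionLayer.Builder(5, 5)
                        .nIn(CHANNELS).nOut(20).stride(1, 1)
                        .activation(Activation.RELU).build())
                .layer(1, new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
                        .kernelSize(2, 2).stride(2, 2).build())
                .layer(2, new ConvolutionLayer.Builder(5, 5)
                        .nOut(50).stride(1, 1)
                        .activation(Activation.RELU).build())
                .layer(3, new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
                        .kernelSize(2, 2).stride(2, 2).build())
                .layer(4, new DenseLayer.Builder()
                        .nOut(500).activation(Activation.RELU).build())
                .layer(5, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nOut(numClasses).activation(Activation.SOFTMAX).build())
                .setInputType(InputType.convolutional(HEIGHT, WIDTH, CHANNELS))
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();

        for (int epoch = 0; epoch < 10; epoch++) {                  // a handful of passes over the data
            net.fit(trainIter);
            trainIter.reset();
        }

        // Save the trained model so something like the classification sketch in the post can reuse it.
        ModelSerializer.writeModel(net, new File("faceModel.zip"), true);
    }
}
```

As far as I know, moving this onto the GPU is mostly a matter of swapping the nd4j backend jar for the CUDA one on the classpath; the network code itself stays the same.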
I didn't have any specific algorithms in mind, I just knew TensorFlow was quite capable machine-learning-wise. I had never heard of deeplearning4j, but I had used the NEAT and HyperNEAT neuroevolution algorithms before and was considering doing something similar in TensorFlow. The idea was to build a movement preprocessor that implemented a sort of personality matrix, so that InMoov would move in slightly different ways depending on its personality settings. The training data would have been taken from YouTube videos or similar, with skeleton tracking and a tag specifying what type of personality the person was emoting.

As for why a NativePython service: I specifically needed an STT engine that could operate without WiFi, and Sphinx is far from adequate. PocketSphinx, on the other hand, is quite accurate for an offline STT engine. I also wanted to implement a Google Assistant service, and the easiest way to do so was through a NativePython service.

As for the process handling, the processes are synced through a handshake sequence. Terminating the native service also releases the NativePython proxy class, and vice versa. If communication is broken, the native service raises an exception, as does the NativePython service unless configured otherwise.
Looking at recent commits, it looks like someone else has a similar idea with the Proxy service class.
I'll take a look at deeplearning4j, but it sounds like TensorFlow isn't needed. Thanks for enduring my ramblings :)