Thanks to Dom and Anthony. Here is what I've done:
Ask the robot what it can see: it captures a picture with OpenCV, runs a YOLO detection on the picture, and exports the results to a text file. The robot can then just read the text to say what it can see :D
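For anyone who wants to try the same flow, a rough standalone Python sketch could look like this (the file names, the darknet command line and the speech library are assumptions, not the actual script):

```python
import subprocess
import cv2
import pyttsx3

# 1. Grab one frame from the default camera and save it to disk
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("could not capture a frame")
cv2.imwrite("capture.jpg", frame)

# 2. Run darknet on the saved picture and redirect the predictions to a text file
with open("yolo_result.txt", "w") as out:
    subprocess.run(
        ["darknet", "detector", "test",
         "cfg/voc.data", "cfg/yolo-voc.cfg", "yolo-voc.weights", "capture.jpg"],
        stdout=out, check=True)

# 3. Read the text file back and let the robot say what it saw
with open("yolo_result.txt") as f:
    seen = f.read()

engine = pyttsx3.init()
engine.say("I can see " + seen)
engine.runAndWait()
```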
Awesome Job!
I did something similar with TesseractOcr, where I added an OOB to ProgramAB to respond to "Read This". It then takes a photo and processes it, but instead of exporting to a text file I just set the result to a string in Python and then process it with text to speech.
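A minimal standalone sketch of that flow might look like this (the real version is wired through a ProgramAB OOB tag inside MRL; pytesseract and pyttsx3 here are stand-ins, not necessarily what Kyle used):

```python
import cv2
import pytesseract
import pyttsx3

# Take one photo from the default camera
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("could not take a photo")

# OCR the photo straight from memory instead of exporting to a file
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
text = pytesseract.image_to_string(gray)

# Hand the recognized string to text to speech
engine = pyttsx3.init()
engine.say(text if text.strip() else "I could not read anything")
engine.runAndWait()
```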
I look forward to doing this same thing with Junior very soon. Does it find more than a single item?
Kyle
Yes I saw your video, it’s impressive :)
Of course it's possible to detect multiple things in the same picture!
Nice job! I guess with a single capture instead of a video stream, the detection can even be done in reasonable time without a big GPU?
Prediction took 0.04 s with a GTX 960, so I think it runs great on lower-end GPUs. It takes more than 4 seconds with CPU only. Keep in mind this test used the yolo-voc definition and not the tiny-yolo-voc one; tiny YOLO is a lower-quality definition file for GPUs with 1 GB of memory. I will use the coco-voc or yolo-9000 definition file, which requires 4 GB of video memory.
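If anyone wants to compare definition files on their own hardware, a quick timing sketch could look like this (standard darknet file names are assumed here; swap in the tiny-yolo-voc cfg/weights for the 1 GB GPU case):

```python
import subprocess
import time

def predict(cfg, weights, image="capture.jpg", data="cfg/voc.data"):
    # Run one darknet prediction and report how long it took
    start = time.time()
    result = subprocess.run(
        ["darknet", "detector", "test", data, cfg, weights, image],
        capture_output=True, text=True, check=True)
    print("%s: %.2f s" % (cfg, time.time() - start))
    return result.stdout

print(predict("cfg/yolo-voc.cfg", "yolo-voc.weights"))
print(predict("cfg/tiny-yolo-voc.cfg", "tiny-yolo-voc.weights"))
```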
Very cool!
Hopefully the Lenovo tablet in InMoov will be able to handle it too...
Is any script available on GitHub to test? Otherwise I will wait to see you at Maker Faire Lille!
Cool!
Very cool, what you've done here! From what I see in your screenshot, it looks like you're calling an external program after saving the image to disk. This should work for one-off requests, but for anything faster or more frequent, you'll want to do everything in-memory. I'll be doing a lot of work on this for Nixie, using mrlpy as the interface between YOLO and MRL. Once this is implemented, one should be able to simply start a YOLO service and subscribe to topics pushing object classifications. Until this is finished, however, I'll be using your solution. To aid in using the data from YOLO, you could edit the example code and have it output a comma-separated list of classifications that could easily be parsed by MRL, instead of the human-readable text that it outputs now.
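As a sketch of that idea, the usual "label: 82%" console lines from darknet could be collapsed into a comma-separated string and split again on the MRL side (the line format is an assumption; adjust the parsing if your output differs):

```python
def to_csv(yolo_text):
    # Keep only the "label: NN%" lines and join the labels with commas
    labels = []
    for line in yolo_text.splitlines():
        if ":" in line and line.rstrip().endswith("%"):
            labels.append(line.split(":")[0].strip())
    return ",".join(labels)

# On the MRL side the string is then trivial to consume:
csv = to_csv(open("yolo_result.txt").read())
for label in csv.split(","):
    if label:
        print("robot sees a " + label)
```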
Really excited to see what we can do with YOLO!
Nice work Bretzel
Really looking forward to seeing the fruits of your labour
Shout if you're looking for any steer on the current challenges you mentioned yesterday with the Python script.
Thank you for posting it, it will be a great tool!
Here is the beast!
STX motherboard with GTX 1060 MXM GPU.
Born to fit in the robot!
Ready for integrated Deep Learning
Wow, amazing build you've got there! Definitely ready for some deep learning fun! ;)
I'm curious, what specs does the motherboard have? I'd like to know how much power you can cram into one of these things. Trying to decide what board form factors I should be looking at, the STX looks just right though! :)
The motherboard is a Micro-STX format with an MXM port. The board is bigger than a standard STX.
It has an i3-7300T (maybe an i5 later) with 16 GB DDR4 and a GTX 1060 6GB. It's possible to use an i7, though probably nothing over a 65 W TDP.
Here is MyRobotLab with YOLO (CPU only) for Windows x64 systems:
https://1drv.ms/u/s!AuxWJ2KGKWYB0gGjHkgblrajpbVx
MRL needs to be placed in c:\myrobotlab. If you want to change the directory, you need to modify yolo.py and "start yolo.bat".
There is a "need to install" folder with the MSVC 2010 and 2015 runtimes; install them before testing YOLO.
Launch myrobotlab.jar on its own, install the OpenCV runtime, and close it. Then launch "start yolo.bat" and look at the Python tab.
Detection runs every 7 seconds (you can change the timer at the end of "yolo.py").
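For reference, a minimal version of such a repeating timer could look like this (not the actual yolo.py code; run_detection() stands in for whatever capture + YOLO call the script does):

```python
import time

DETECTION_INTERVAL = 7  # seconds between detections; edit this value to change the timer

def run_detection():
    # placeholder for the real capture + YOLO + write-result step
    print("running detection...")

while True:
    run_detection()
    time.sleep(DETECTION_INTERVAL)
```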
We can call it a beta version :D
Thanks
Thank you for sharing your work, I'll check that ;)
This is better, sorry it's in French ;)
https://youtu.be/GhYlKgAAjpI