Pioneers are the guys with the arrows in their backs. GroG can probably tell you all about that.
If you count the time I drew a face on a box and taped it to a toy car, I've been building robots for 45 years. Problem was, for most of that time, the electronics and mechanical components were expensive and didn't do all that much. Also, finding kindred spirits was difficult. I capered like a drunk monk the day I got the mutant serial port of a VIC-20 to control the speed of a toy motor. REAL computer control of a motor! All I needed was another VIC-20 for differential control!
Arduinos and motor controller shields sure have made motion easy. It's nothing to plug a servo and some sort of ultrasonic or infrared sensor into an Arduino, add a few lines of code, and have them working together.
But we were doing that in the '80s. Parallax and BASIC Stamps have been around for a long time. And once upon a time, Polaroid had a nice ultrasonic development kit that went into a few robots. Stuff cost more, and was (arguably) harder to use, but it was there. When you come down to it--for the longest time, the average hobby robot wasn't doing much more than it had been doing since the '80s.
Then, some really cool things happened in the field of robot vision. Hobbyists noticed OpenCV and figured out how to use it. And Lady Ada risked the wrath of God and put up a bounty for the secrets of Kinect.
For the first time, hobby robots could SEE. REALLY see. And understand what they saw.
When I started looking into the Microsoft Kinect, I found a similar device called the Xtion Pro Live--smaller, lighter, simpler power requirements--and that's where I decided to put my money. But when I finally got one, I quickly figured out that I had no idea what to do with it. I finally found SimpleOpenNI, and after a little work--my computer (and therefore, a computer-controlled robot) could see! Recognize human figures. Sense depth visually. Who needed ultrasonic sensors?
But. . .it still wasn't exactly easy. You had to be pretty good with C code. Well, SimpleOpenNI wrapped OpenNI in Processing, but--hey, wait a minute--what's this MyRobotLab stuff?
Through MyRobotLab, GroG makes OpenCV and OpenNI as easy as clicking buttons. Well, OK, there's a little more to it than that. You have to do some Jython (Python) coding to get anything interactive happening--see the sketch below. And, like much of the rest of MyRobotLab, it's still a work in progress. But progress was being made.
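A taste of what that looks like: here's about the smallest useful script you can feed the Jython tab--a minimal sketch, assuming the stock OpenCV service (the name "opencv" is just my label for it):

    # minimal Jython sketch for MyRobotLab's Python/Jython tab:
    # create the OpenCV service and start grabbing frames
    from org.myrobotlab.service import Runtime

    opencv = Runtime.createAndStart("opencv", "OpenCV")  # "opencv" is an arbitrary name
    opencv.capture()  # frames start streaming into the OpenCV tab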
Uh. . .well, it worked if you had a Kinect. And it worked best under Windows. Java runs on Linux and Windows, so it SHOULD work under Linux.
Heh. Devil is in the details.
I had GroG's interest. But GroG didn't have an Xtion, and Ubuntu is sometimes trickier than Windows. And GroG is only one man. With a family. And a real job. And there are other components of MyRobotLab that need attention. Not everyone has a Kinect (and fewer still have an Xtion). And OpenCV worked--until the last video driver update. (Thanks, Mark!) Oh, I know GroG will get back to it. But. . .*sigh* after all this--I STILL had a blind robot.
What to do. . .
Well. . .SimpleOpenNI runs under Processing--and it WORKS. And Processing runs under Java--just like MyRobotLab. So OpenNI should work, right?
Heh. Pesky details.
But, with a little fumbling and experimenting, I managed to install a missing freeglut3 library and get the OpenNI examples to run from the command line. Then, when I compared library files between OpenNI and MyRobotLab, I noticed they were different. Hmm. . .how different? Copy, paste, and YES dammit--replace.
This is what I got with the capture button.
That's my arm. In MyRobotLab. My robot can see my arm in MyRobotLab. MY ROBOT CAN SEE MY ARM IN MYROBOTLAB!
Uh. sorry about that. Caps lock got stuck.
It's NOT a solution. This is only one OpenNI function working, and it wasn't in real time. I couldn't see the image until I hit "stop capture."
But it's progress. Up until a few hours ago, I didn't know if MyRobotLab would EVER talk to MyXtionProLive. Now I have an arm pointing the way.
Here are the exact steps I took to do it. NOTE--These instructions are specific to 32-bit Ubuntu 12.04 and an Asus Xtion Pro Live. You shouldn't have to do any of this for Windows and a Kinect, and you'll need. . .something different if you have Linux and a Kinect. Maybe it's already working for you.
Download the unstable (most recent) OpenNI, middleware (NITE) and hardware binaries from the OpenNI download site.
Unzip each module into a folder. (They should unzip into separate folders.)
Navigate to each module's folder and, in a terminal, type "sudo ./install.sh" (no quotes). The installers should run, as sketched below.
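In practice, the whole install pass looks something like this in a terminal--the folder names here are illustrations only, since they change with every version:

    # run each module's installer in turn; substitute your actual folder names
    cd OpenNI-Bin-Dev-Linux-*/  && sudo ./install.sh && cd ..
    cd NITE-Bin-Dev-Linux-*/    && sudo ./install.sh && cd ..
    cd Sensor-Bin-Linux-*/      && sudo ./install.sh && cd ..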
In a terminal, cd to the /OpenNI-Bin-Dev-(etc)/Samples/Bin/x##-Release folder. (The exact path will depend on where you unzipped, which version of the software you downloaded, and your platform--i.e., 64- or 32-bit.)
Once you are in the Release folder, type ./NiViewer. You should see a split screen of a relatively normal image alongside a depth image.
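On my 32-bit setup, that boiled down to something like this (the versioned folder name is just an example):

    # the exact path varies with your OpenNI version and platform
    cd ~/OpenNI-Bin-Dev-Linux-*/Samples/Bin/x86-Release
    ./NiViewer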
I didn't get that. I got an error message: "error while loading shared libraries: libglut.so.3: cannot open shared object file: No such file or directory". I solved that by installing freeglut3 from the Ubuntu Software Manager.
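If you'd rather stay in the terminal, this one-liner should do the same thing on Ubuntu 12.04:

    # install the missing GLUT runtime that NiViewer links against
    sudo apt-get install freeglut3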
If you can see the image as described--you've got OpenNI properly installed. Now to get it to work with MyRobotLab.
Run MyRobotLab and install the OpenNI service. Close MyRobotLab after it restarts.
Copy all the files from /OpenNI-Dev-Bin###/lib and paste them into your MyRobotLab's lib folder. (If it's a bleeding-edge download, it will be "intermediate.#####/lib.") Once you paste the files, you'll be informed that they already exist and asked if you really want to replace them. Yes, you do. (dammit)
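From a terminal, the copy is something like this--both paths are examples, so point them at wherever you actually unzipped OpenNI and put MyRobotLab:

    # overwrite MyRobotLab's copies with the ones OpenNI installed
    cp ~/OpenNI-Bin-Dev-Linux-*/lib/* ~/myrobotlab/lib/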
Start MyRobotLab and start the OpenNI service (or script it--see below). You'll probably have to detach the service to see anything--which should be a green and blue checkerboard.
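And if you prefer the Jython tab to button-clicking, starting the service from a script should look about like this--assuming the service type is named "OpenNI" in your build:

    # start the OpenNI service from MyRobotLab's Python/Jython tab
    # ("openni" is my label; the type name "OpenNI" is my assumption)
    from org.myrobotlab.service import Runtime

    openni = Runtime.createAndStart("openni", "OpenNI")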
Position your Xtion sensor at a stationary target and click "capture." Then click "stop capture."
If all goes well, you should see. . .some depth-specific interpretation of your target. If you see anything else, click Help, then About, and shoot GroG some no-worky love.