nomasfilms asked a great question:
"Has anyone interfaced MyRobotLab…"
It is my plan to incorporate several AI services into MyRobotLab (MRL). There are so many! So identifying possible candidates would be the first task.
I can think of two immediate challenges for which AI services might provide solutions. The first is a general chatbot. This could be combined with the Speech and Sphinx services to provide semi-interesting conversation. I'm very interested in the "learning" aspects, where data is added and organized in a continuous and dynamic way. I would like to create a chatbot which interacts with myrobotlab.org's shoutbox and preserves responses in a publicly accessible database. I have already created a user for this purpose and found a great tutorial going into some of the rudimentary depths of chatbot AI (http://www.codeproject.com/Articles/36106/Chatbot-Tutorial).
- List of SourceForge A.I. projects - many are Java, which makes integration "almost" trivial
- List of AI Java projects
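The rudimentary chatbot approach covered in tutorials like the one linked above usually boils down to keyword matching against a growing rule set. Here is a minimal, self-contained sketch of that idea (the class name and rules are made up for illustration; this is not MRL's or the tutorial's actual code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal keyword-matching chatbot sketch. A real service would load its
// rules from a database and grow them dynamically, as described above.
public class TinyChatbot {
    // keyword (uppercase) -> canned response, checked in insertion order
    private final Map<String, String> rules = new LinkedHashMap<>();

    public TinyChatbot() {
        rules.put("HELLO", "Hi there!");
        rules.put("ROBOT", "I am built from MyRobotLab services.");
    }

    // "learning": add a new keyword -> response pair at runtime
    public void learn(String keyword, String response) {
        rules.put(keyword.toUpperCase(), response);
    }

    public String respond(String input) {
        String normalized = input.toUpperCase();
        for (Map.Entry<String, String> e : rules.entrySet()) {
            if (normalized.contains(e.getKey())) {
                return e.getValue();
            }
        }
        return "Tell me more."; // fallback when nothing matches
    }

    public static void main(String[] args) {
        TinyChatbot bot = new TinyChatbot();
        System.out.println(bot.respond("hello robot")); // matches HELLO first
        bot.learn("CAT", "I will watch the table for cats.");
        System.out.println(bot.respond("Where is the cat?"));
    }
}
```

A shoutbox-connected bot would feed each incoming message through `respond()` and persist new `learn()` pairs, which is where the continuous, dynamic data organization comes in.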
Thank you for blogging on this and opening this up for discussion. I believe this type of machine learning opens up all types of possibilities in robotics. I am interested in android development such as InMoov (Gael's incredible creation), and not only would I like AI to chat with, but AI incorporated into movement control, recognition, and MUCH more. If this software can learn to hold a basic conversation (in any language) in a few minutes, which I have experimented with, why not learn certain movements too? Let's say the head is tracking a voice via stereo audio differential; why not use AI to quickly learn the correct method? Right channel audio louder, turn left... NO. Then turn right, better. Won't ever forget that again.

Vision tracking, head tracking, displaying gestures and moods, navigation: all should be possible with this type of simple AI. Connected to the internet? Wiki lookup and text to speech, with access to nearly unlimited facts, data, and conversions, quickly gives our android a reason to live. Downloadable knowledge bases already exist for some of these 'bots, an idea that makes basic learning quick and easy. I think this could be right around the corner. Thanks again, Chris
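The trial-and-error turning idea above ("turn left... NO, then turn right, better. Won't ever forget that again") can be sketched as a tiny learned stimulus-to-action table. Everything here (class, stimulus, and action names) is hypothetical, just to make the learning loop concrete:

```java
import java.util.HashMap;
import java.util.Map;

// Trial-and-error sketch: guess a turn direction for an audio stimulus,
// get feedback, and permanently remember whichever action worked.
public class AudioTurnLearner {
    // stimulus ("RIGHT_LOUDER" / "LEFT_LOUDER") -> learned action
    private final Map<String, String> learned = new HashMap<>();

    public String chooseAction(String stimulus) {
        // use what we've learned; otherwise take a naive first guess
        return learned.getOrDefault(stimulus, "TURN_LEFT");
    }

    // feedback: did the left/right loudness difference shrink afterward?
    public void feedback(String stimulus, String action, boolean improved) {
        if (improved) {
            learned.put(stimulus, action); // won't ever forget that again
        } else {
            // the guess was wrong - remember the opposite action instead
            learned.put(stimulus,
                action.equals("TURN_LEFT") ? "TURN_RIGHT" : "TURN_LEFT");
        }
    }

    public static void main(String[] args) {
        AudioTurnLearner head = new AudioTurnLearner();
        String guess = head.chooseAction("RIGHT_LOUDER"); // naive: TURN_LEFT
        head.feedback("RIGHT_LOUDER", guess, false);      // that made it worse
        System.out.println(head.chooseAction("RIGHT_LOUDER")); // now TURN_RIGHT
    }
}
```

One wrong guess is enough to flip the mapping for good, which is exactly the "learn it once, keep it forever" behavior described in the comment.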
Ok, great comment, let's get started.
MRL can currently recognize speech and can control motion with speech commands. I'm currently working actively on Vision, which includes object identification. Object identification from a video stream is a challenge, but I believe it has a lot of potential too.
Another thing I am very interested in doing is getting MRL to run continuously with some learning programs. Specifically chatbot learning & object identification.
I'd like to implement more, so I thought I'd ask you some questions:
Do you have experience with other potential software candidates besides Braina (preferably open source & Java)? I was also curious about your personal background in robotics & AI. Are you an experimenter? Do you use Arduino, or have you experimented with MRL? Do you plan to? Your comments and questions have piqued my interest, but I'd like to know about your background and goals. One of the things which most quickly drives development is concrete projects like InMoov or Houston.
AI and Me...
Hey Grog. The only other experience I have with AI so far is an app from International Language Machines. I have no Java experience whatsoever. My background is mostly electro-mechanical. I built robots as a kid, and I guess I am revisiting my childhood. My only programming experience is in BASIC. I have just started learning MRL, and it is VERY interesting to me.

I found Gael's work recently, and I'm considering building some form of android using at least some of his design. As far as I can tell, he is at the absolute top of the hobby android game. My plan currently is to get an Arduino experimenter's kit and learn to interface it with MRL. I'm not without electrical and electronic experience, and I have a patent on an electronic musical instrument I invented and constructed.

I commented because I think I can see where this new technology is headed, and I would love to be part of it. I wish I could offer more programming expertise. I'm sure that's where you need the most help. It certainly is a hell of a lot easier to simply say "Gee, I think Kinect is the best hope for human motion tracking, and should be incorporated" than to actually develop the code. I hope to offer all I can to aid in the development of MRL.

I did review the list of Java programs that you posted, and some were amazing. I saw a program that can learn shapes, along with their names and adjectives, all by watching video! How cool is that? Unfortunately, some of these offerings might take a doctorate in AI to understand. I think your current idea of modules for various functions is perfect, and I can imagine it will take several modules incorporated and communicating to equal one viable android brain. Chris
Thanks for your information; I'm really looking forward to seeing your projects. If you see something A.I. related and "cool", please post the link in a comment here. (Specifically, I'd like to look at the video / shape names & adjectives program - I have not found it yet.) It's definitely helpful for development of MRL to have links and lists of resources like you mention.
Also, when you get to a point where you need help with MRL, let me know. Sadly, I have not had time to make any good video tutorials covering the basics. Hmm, speaking of which, with your user name I'm guessing you have "film" experience... Hmmmm... nudge nudge, wink wink ;)
Here are a few links. First, from the A.L.I.C.E. bot page, http://alicebot.blogspot.com/ , comes Program AB:
"Motivated by the technological and social changes of this mobile era, the ALICE A.I. Foundation has released a new, free, open source Java implementation of the AIML 2.0 Draft Specification. "
"Significantly, Program AB implements some memory and storage optimizations that make it possible to run a sophisticated chatbot on a mobile device or embedded system. Versions of Program AB have already been tested on Android phones and tablets, as well as the Raspberry Pi single-board computer."
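For concreteness, the unit of knowledge that AIML interpreters like Program AB load is a "category": a stimulus pattern plus a template response. The example below follows the AIML 2.0 structure, but the pattern and template themselves are made up, not taken from any real bot file:

```xml
<aiml version="2.0">
  <!-- one category = one stimulus/response rule; the content here is
       illustrative only -->
  <category>
    <pattern>WHAT IS MYROBOTLAB</pattern>
    <template>MyRobotLab is an open source Java framework for robotics.</template>
  </category>
</aiml>
```

A bot's "knowledge base" is just a set of files full of these categories, which is why downloadable knowledge bases and quick basic learning are practical.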
Next, here's the link for the Video learning Java project: http://freecode.com/projects/ebla
"Experience-Based Language Acquisition (EBLA) is an open computational framework for visual perception and grounded language acquisition. It can "watch" a series of short videos and acquire a simple language of nouns and verbs corresponding to the objects and object-object relations in those videos. Upon acquiring this protolanguage, it can perform basic scene analysis to generate descriptions of novel videos. While there have been several systems capable of learning object or event labels for videos, this is the first known system to acquire both nouns and verbs using a grounded computer vision system"
It seems as though it won't be long before your robot can tell you when the cat is up on the table... :-)
More soon, Chris
That's great nomasfilms !
I was looking into Program D - which looks like it's an older implementation compared to AB.
I wasn't able to find the source code for EBLA - and noticed it was under the BSD license (which might not require the source to be published). I saw a jar to download - sometimes distributions package the source in the jar (I'll check on this later).
Here are some more links I have scooped from the internet to add to our growing set of possible toys:
Was just thinking, in light of all the recent video / computer vision stuff - the quickest way to get involved in the latest InMoov development would be to get 2 servos and a pan-tilt kit like this:
with a web cam on top of it and an Arduino.
Alternatively, you could get a Kinect...
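Once the camera sits on a pan-tilt kit, keeping a tracked object centered comes down to turning its pixel offset from the frame center into small servo corrections. Here is a hedged sketch of that idea as a simple proportional controller (this is not MRL's actual Tracking service; the class, gain, and sign conventions are illustrative assumptions):

```java
// Proportional pan/tilt sketch: nudge servo angles toward a tracked
// target's pixel position. Sign conventions depend on how the servos
// are mounted, so treat the directions here as an assumption.
public class PanTiltController {
    private double pan = 90.0;   // servo angles in degrees, 90 = centered
    private double tilt = 90.0;
    private static final double PIXELS_PER_DEGREE = 20.0; // assumed gain

    // targetX/targetY: pixel position of the tracked object;
    // width/height: video frame size in pixels
    public void update(int targetX, int targetY, int width, int height) {
        double errX = targetX - width / 2.0;  // + means target is right of center
        double errY = targetY - height / 2.0; // + means target is below center
        pan = clamp(pan - errX / PIXELS_PER_DEGREE);
        tilt = clamp(tilt + errY / PIXELS_PER_DEGREE);
    }

    private static double clamp(double angle) {
        return Math.max(0.0, Math.min(180.0, angle)); // typical servo range
    }

    public double getPan() { return pan; }
    public double getTilt() { return tilt; }

    public static void main(String[] args) {
        PanTiltController c = new PanTiltController();
        // target at x=480 in a 640x480 frame: 160 px right of center
        c.update(480, 240, 640, 480);
        System.out.println("pan=" + c.getPan() + " tilt=" + c.getTilt());
    }
}
```

An Arduino running a servo sketch would then receive the pan/tilt angles over serial each frame, which is exactly the kind of loop the 2-servo kit above enables.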
OK, got it. What camera resolution are we looking for in a webcam? Does high res slow the process down? I think a high exposure range or latitude would be important... Chris
Could be anything you have, anything that's convenient, or anything that's available. MRL is designed to handle a large variety. The OpenCV service (http://myrobotlab.org/service/opencv) can change resolution, and even if the camera does not support multiple resolutions, the service can add a "PyramidDown" filter, which post-processes each video frame down to a smaller resolution.
There are 2 criteria I'm typically more interested in:
1. Frame/data rate - the higher, the better, always.
2. Field of view - most cameras have (relative to the human eye) a very narrow field of view, so it's like trying to navigate or identify objects while looking through a toilet paper roll.
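To make the "PyramidDown" idea above concrete: each pyramid step halves the frame resolution. The sketch below illustrates the concept with simple 2x2 block averaging on a grayscale image; it is not MRL's or OpenCV's implementation (OpenCV's pyrDown also applies Gaussian smoothing before decimating):

```java
// Conceptual sketch of a pyramid-down step: halve a grayscale frame's
// resolution by averaging each 2x2 block of pixels into one pixel.
public class PyramidDownSketch {
    public static int[][] downsample(int[][] gray) {
        int h = gray.length / 2;
        int w = gray[0].length / 2;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // average the four source pixels covering this output pixel
                out[y][x] = (gray[2 * y][2 * x] + gray[2 * y][2 * x + 1]
                           + gray[2 * y + 1][2 * x] + gray[2 * y + 1][2 * x + 1]) / 4;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] frame = { {10, 20, 30, 40},
                          {10, 20, 30, 40},
                          {50, 60, 70, 80},
                          {50, 60, 70, 80} };
        int[][] half = downsample(frame); // 4x4 frame -> 2x2 frame
        System.out.println(half[0][0] + " " + half[0][1]); // prints "15 35"
    }
}
```

Each step cuts the pixel count by 4x, which is why a high-res camera can still feed a fast processing pipeline: filter early, then run the expensive vision work on the small frame.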
I'm a big follower of Subsumption Architecture
I happen to own "Fast, Cheap, and out of control" on DVD. It features the father of Subsumption Architecture...