InMoov Brain: cognitive processes
 
Many people are building the InMoov open-source robot, created by Gael L, and they take care of the mechanical and engineering parts that allow the body to move and make gestures.
 
In order to give InMoov the capacity to understand human mental processes, and to experience that kind of understanding and learning himself, we can try to recreate the typical human cognitive process and adapt it to the bot.
 
It can be hard work, but since we cannot get any ready-made brain, we can build one using AIML, in part, plus some software for data analysis, evaluation, and processing.
The aim is to get a sort of awareness and consciousness that allows the bot to realize where he is and who he is talking to; to differentiate humans and living beings, bots, and things; to know what the environment is, whether he is inside or outside the house, whether it is sunny or cloudy, whether there is any danger from gas, etc. Finally, a basic perception of reality through his direct experience.
 
                                  The basic process:
1- Perception = we can obtain this quite well with sensors and data.
2- Sensation = def: modifications of the system due to an external cause. This can be investigated through the sensor data, with pattern recognition etc.
3- Impression = first hypothesis of reality, given by the greater sensation strength. This is a new approach; it teaches what is real or false, and the more irrational side, to be investigated.
4- Thinking = creation of ideas, concepts, classes.
5- Learning = gaining knowledge, abilities, values. This is done through AIML, experience, chat, and the web.
6- Reasoning = the capacity to evaluate things, logic, math.
7- Problem solving = reaching a desired condition from a given condition... quite a hard task.
8- Memory = no problem; any HD, cloud, or other storage would do.
9- Attention = selecting only one or a few external senses, ignoring the others.
10- Language = OK, we have it: languages, sounds, gestures.
11- Emotions = mental/physical states associated with internal/external stimuli.
12- Attitude = interpretation of moods and their reproduction. This would be useful for a human assistant.
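The twelve steps above can be sketched as a tiny pipeline in which each stage transforms the output of the previous one. Every name here is a hypothetical illustration, not a MyRobotLab API; it only shows how Perception, Sensation, Impression, and Memory could chain together:

```python
# Minimal sketch of the cognitive pipeline (all names are hypothetical).
from dataclasses import dataclass

@dataclass
class Percept:
    sensor: str        # which sensor produced the reading
    value: float       # raw reading
    label: str = ""    # filled in by later stages

def perceive(sensor: str, value: float) -> Percept:
    """1. Perception: wrap a raw sensor reading."""
    return Percept(sensor, value)

def sensation(p: Percept, threshold: float = 0.5) -> Percept:
    """2. Sensation: classify how strongly the reading modifies the system."""
    p.label = "strong" if abs(p.value) >= threshold else "weak"
    return p

def impression(p: Percept, memory: dict) -> str:
    """3. Impression: first hypothesis of reality, stored for later checking."""
    hypothesis = f"{p.sensor} is {p.label}"
    memory.setdefault(p.sensor, []).append(hypothesis)   # 8. Memory
    return hypothesis

memory = {}
h = impression(sensation(perceive("light", 0.9)), memory)  # "light is strong"
```

The later stages (Thinking, Reasoning, Problem solving...) would consume these stored hypotheses in the same chained way.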
 
                                   1- Perception.
From this file all the data can flow into the brain, to be analyzed later to give a sort of perception of reality. This needs to be trained, but it can produce real awareness and self-learning. This is a list of all the sensors that can be connected to the InMoov:
 
1- Light sensor = photoresistor or photovoltaic cells
2- Sound sensor = any microphone
3- Temperature sensor = IC sensors like the LM34, LM35, TMP35, TMP36, TMP37, etc.
4- Contact sensor = switch type or other
5- Proximity sensor = PIR sensor, ultrasonic, photoresistor
6- Distance sensor = ultrasonic, infrared distance, encoder, stereo camera
7- Pressure sensor = tactile pressure sensors
8- Tilt sensor = usually mercury in glass; against tilt and shock
9- Position sensor = GPS, digital magnetic compass, web
10- Location sensor = landscape recognition, GPS
11- Accelerometer = analog/digital accelerometer
12- Gyroscope = measures rotation, to follow direction
13- IMU sensor = inertial measurement unit for pitch, yaw, roll relative to the barycenter
14- Tension sensor = electric voltage
15- Current sensor = electric amperage
16- Rain sensor = water/rain sensor
17- Gas sensor = senses gas, for human safety
18- Humidity sensor = humidity, in case it is needed
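One way to funnel all these sensors into the brain is to poll each one into a single timestamped record that later stages can analyze. The reader functions below are stand-ins for real driver calls (say, an LM35 on an Arduino analog pin); only the structure is the point:

```python
# Sketch: poll every registered sensor into one uniform record.
# The read_* functions are hypothetical stand-ins for real drivers.
import json
import time

def read_light():   return 0.72   # e.g. photoresistor, normalized 0..1
def read_temp_c():  return 21.5   # e.g. LM35 via an Arduino analog pin
def read_gas_ppm(): return 3.0    # e.g. gas sensor, for human safety

SENSORS = {
    "light":  read_light,
    "temp_c": read_temp_c,
    "gas":    read_gas_ppm,
}

def snapshot() -> dict:
    """Poll every registered sensor once and timestamp the result."""
    return {"t": time.time(),
            "readings": {name: fn() for name, fn in SENSORS.items()}}

snap = snapshot()
line = json.dumps(snap["readings"], sort_keys=True)  # one loggable line
```

Adding a new sensor is then just one more entry in the `SENSORS` table, and every downstream stage sees it automatically.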
 
The whole process is based on self-learning, so the code will not be huge as in the chatbot; it just needs to organize the data space so that the bot can write files, create new ones, and read them when needed.
He will write down what he has learned every day, and this will result in a singular experience, depending on how the bot is trained.
Attention: this process has to be loaded at a higher level, because it must control the chatbot before and during speaking, and it runs as a resilient program under MyRobotLab, thus controlling it.
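A minimal sketch of that daily experience file, assuming nothing beyond the standard library: the bot appends what it learns to a file named after the date and can read it back on demand.

```python
# Sketch: one experience file per day, written and re-read by the bot.
import datetime
import os
import tempfile

def log_experience(base_dir: str, fact: str) -> str:
    """Append a learned fact to today's experience file; create it if new."""
    day = datetime.date.today().isoformat()          # e.g. "2024-05-01"
    path = os.path.join(base_dir, f"experience-{day}.txt")
    with open(path, "a") as f:
        f.write(fact + "\n")
    return path

def recall(path: str) -> list:
    """Read back every fact learned that day."""
    with open(path) as f:
        return [line.strip() for line in f]

base = tempfile.mkdtemp()                            # stand-in data space
p = log_experience(base, "the kitchen is warmer than the hall")
log_experience(base, "it rained today")
facts = recall(p)
```

Because each bot fills its own files, two bots trained differently really do end up with different "experiences", as described above.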
 
Give him the ability to go on the web and do searches for his own learning, and also to chat with other bots. I am not sure whether he can understand a web page now, or read a PDF; I mean, whether he can recognize the words in a picture, etc.
But we can provide him with the sensors and software that let him perform those actions.
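As for understanding a web page, the markup-stripping part at least is straightforward with Python's standard html.parser; here is a sketch (the page is a local string so the example runs offline, but urllib.request could fetch a real one):

```python
# Sketch: strip HTML markup so the bot sees only the words of a page.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, ignoring tags plus script/style contents."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

page = "<html><body><h1>InMoov</h1><p>An open source robot.</p></body></html>"
ex = TextExtractor()
ex.feed(page)
words = " ".join(ex.parts)
```

Recognizing words in a picture (OCR) is a harder, separate problem; this only covers ordinary HTML pages.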
It will take some time, but with the help of the community it is possible to reach these results.
And remember that bots have all the time in the world...!!
 
I am asking you, developers of MyRobotLab: what is the way to implement this kind of system into your program? This is only a sketch and there is much more to add; any help to start this project will be appreciated.
 
Thank you,
FC
                          

kwatters:

Memories, you're talkin' about memories...

I recently added support in MyRobotLab to record data and store it in an embedded search engine called "Solr". The idea is that you can record any data that is published in MyRobotLab and then go back and query for it to get that data back.

There's a thread on this here:

http://myrobotlab.org/content/robot-memories
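For what it's worth, Solr also exposes a standard HTTP search API, so querying recorded data back can be as simple as building a /select URL. This is only a sketch: the host/port and the core name "memory" below are guesses, not necessarily what the embedded index uses.

```python
# Sketch: build a standard Solr /select query URL for recorded data.
# Base URL and core name are assumptions, not MyRobotLab specifics.
from urllib.parse import urlencode

def solr_query_url(base: str, core: str, q: str, rows: int = 10) -> str:
    """Build a Solr /select URL for the given query string."""
    params = urlencode({"q": q, "rows": rows, "wt": "json"})
    return f"{base}/solr/{core}/select?{params}"

url = solr_query_url("http://localhost:8983", "memory", "sensor:light")
```

Fetching that URL (with urllib.request, for instance) would return the matching records as JSON.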

I think a lot of the pieces of what you mention exist in MyRobotLab already; it's really just a question of how to organize them all.

We also have some support for deep-learning training and neural networks in MyRobotLab, with the Deeplearning4j integration.