So the general question is:

What should your robot remember?  What should your robot recall?  How do you want to access their memories?

More details to follow...

 

Teaser:  We will embed a "core" to store robot memories.  I think these memories probably include things like:

1. what did I hear?

2. what did I say?

3. what did I see?

4. what did I feel/taste?

5. what did I smell?

 

We can establish a common storage format for all of this and it can be shared!

Once we start sharing those memories, we might start thinking about things like, "who do I know?" ... "what do I know?" ...
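To make the idea concrete, a shared record might look something like this minimal Java sketch (the class, the Sense enum, and the field names are all invented for illustration; this isn't an existing MRL class):

```java
import java.time.Instant;

// Hypothetical sketch of a shared memory record. The class name, the
// Sense enum, and the fields are invented for illustration only.
class MemoryRecord {
    enum Sense { HEARD, SAID, SEEN, FELT, SMELLED }

    final Sense sense;       // which channel produced the memory
    final Instant timestamp; // when the event happened
    final String payload;    // text, or a reference to stored binary data

    MemoryRecord(Sense sense, Instant timestamp, String payload) {
        this.sense = sense;
        this.timestamp = timestamp;
        this.payload = payload;
    }

    @Override
    public String toString() {
        return sense + "@" + timestamp + ": " + payload;
    }
}
```

Every sense would then share the same envelope, differing only in how the payload is produced.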

 

AutonomicPerfe…

6 years 1 month ago

Interesting idea, kwatters. I would think we should probably use a database to store these memories; access is much faster that way, and I would imagine we would be writing lots of data. I wonder, if we could aggregate the memories together, could we build a virtual environment from the data? Kinda like SLAM, but also containing properties of the surfaces encountered, locations of people last spoken to, noise profiles of different rooms, etc., as well as locations of interest, such as the fridge, and procedures to use items. The possibilities are endless!

A virtual environment in sync with realtime sensor data as well as memories would allow a robot to predict what will happen given previous input, such as that a vase will break if it falls. Tying memories into deep learning networks could give this sort of predictive capability.

Oh, I'm excited for this! :)

Can't wait to see what we can do with proper memory!

Is it possible to tie a neural network into an SQL database?

If it is, the possibilities are limitless.

Data replication across databases is relatively straightforward, with the data store then accessible by a neural network...

This is an exciting prospect.

 

moz4r

6 years 1 month ago

Hi! Exciting stuff! At the moment I use the AIML mechanism.

First I decompose knowledge into 2 things:

- Personal memories (ex: what is the name of my mother?)
This uses the predicates mechanism, per session, inside user.predicates.txt.
Personal particles ("my", etc.) are identified inside a set: https://github.com/MyRobotLab/inmoov/blob/master/InMoov/chatbot/bots/fr…

- Global knowledge (ex: what is a beer?)
-> online services search -> ask for the answer -> store inside local learn.aiml.csv

I don't know where to store "instant memories", like "what did I smell?" - maybe inside predicates too?
We also need a way to share global knowledge with other bots.

Yes, a common storage format is necessary! Wahoo, huge thing. Do you have some ideas to enhance things?

 

Thanks for the comments everyone.  

I know there is some desire for a database with SQL capabilities, and we could consider something like that; however, databases don't scale nearly as well as search engines.

If you don't believe that statement, just ask yourself: when was the last time you "oracled" for something? You don't. You "google" for something.

So, we have some minor support for Solr (an open-source search engine that has been around for over 10 years); however, it requires that an external Solr instance be set up. That's not super, so the plan is to embed a Solr server inside MRL itself.

One big question is how the "attach" pattern is going to work for this. For example, if I want to attach speech synthesis to Solr, what does that mean? And what does it mean to attach speech recognition, ProgramAB, or OpenCV to Solr?

Once the embedded Solr server is borged in, we'll want to come up with a grand unified schema for all the objects that we pass around and want to store. This is something that's been on my TODO list for quite some time, and now that we're updating our dependency management, it will become much easier to do.
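As a starting point for discussion, such a unified schema could begin with a handful of flat fields; in Solr's schema syntax it might look like this (the field names here are only a guess, not an agreed standard):

```xml
<!-- Hypothetical starting fields for a unified MRL memory schema;
     names are illustrative only. -->
<field name="id"        type="string"       indexed="true"  stored="true"/>
<field name="service"   type="string"       indexed="true"  stored="true"/>  <!-- e.g. opencv, programab -->
<field name="sense"     type="string"       indexed="true"  stored="true"/>  <!-- heard, said, seen, ... -->
<field name="timestamp" type="pdate"        indexed="true"  stored="true"/>
<field name="text"      type="text_general" indexed="true"  stored="true"/>  <!-- searchable description -->
<field name="data_ref"  type="string"       indexed="false" stored="true"/>  <!-- path to image/audio blob -->
```

Binary data (frames, audio) would live outside the index, with only a searchable text description and a reference stored per entry.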

I think the first use case that I'd like to implement for this new Solr (memory) server is the ability to store training images for facial recognition. After that, we can layer in the rest of the senses.

In the longer run, I'd like the Solr instance to be a place where the robot could store news about the day's events, so you could ask for things like "what's the trending topic in today's news?" and it would be able to come up with answers based on actual real-world events.

For more info about Solr, have a look here:

http://lucene.apache.org/solr/

 

So the search engine will find the stored memory, but how do we store the memory in the first place?

Most search engines are word-based. I'm not sure how you can use that for searching a sound unless we save the sound with a word-based tag.

The base of MRL is Java, so a JSON file may be the best format to store the data in.

That way you can use the same schema to store multiple types of data.
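For instance, one memory entry in that shared JSON shape might look like the following (all keys and values are purely illustrative):

```json
{
  "id": "frame-000123",
  "service": "opencv",
  "sense": "seen",
  "timestamp": "2018-05-01T10:15:00Z",
  "text": "detected one face near the door",
  "data_ref": "data/frames/frame-000123.png"
}
```

A sound memory would use the same keys, just with a word-based tag in "text" and an audio file path in "data_ref".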

Loading the data into a neural network could be an interesting prospect, but saving the images and sounds from the day's events and using them to help train the neural net overnight could also be very advantageous.

 

Maybe short-term and long-term memory, with short-term being held for a set amount of time and long-term being permanent, would save space.
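That two-tier idea could be sketched as a toy in-memory structure like this (not MRL code; the names and the retention policy are just an assumption):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy sketch of short-term vs. long-term memory: short-term entries
// expire after a retention window unless promoted to long-term storage.
// All names here are invented for illustration.
class TwoTierMemory {
    static class Entry {
        final long timeMillis;
        final String payload;
        Entry(long timeMillis, String payload) {
            this.timeMillis = timeMillis;
            this.payload = payload;
        }
    }

    private final Deque<Entry> shortTerm = new ArrayDeque<>();
    private final List<Entry> longTerm = new ArrayList<>();
    private final long retentionMillis;

    TwoTierMemory(long retentionMillis) {
        this.retentionMillis = retentionMillis;
    }

    void remember(long now, String payload) {
        shortTerm.addLast(new Entry(now, payload));
    }

    // Drop short-term entries older than the retention window to save space.
    void forgetOld(long now) {
        while (!shortTerm.isEmpty()
                && now - shortTerm.peekFirst().timeMillis > retentionMillis) {
            shortTerm.removeFirst();
        }
    }

    // Promote everything currently in short-term memory to permanent storage.
    void consolidate() {
        longTerm.addAll(shortTerm);
    }

    int shortTermSize() { return shortTerm.size(); }
    int longTermSize()  { return longTerm.size(); }
}
```

The overnight-training idea above maps naturally onto consolidate(): copy the day's short-term entries into permanent storage before they expire.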

 

kwatters

6 years ago

Disclaimer: this is all on the develop branch currently.

So, lots of progress this weekend. Now the Solr service can start an embedded Solr instance that has a managed schema (this might change), and you can attach a bunch of services to it, such as webkit/sphinx recognition, ProgramAB, OpenCV, and Deeplearning4j.

The Solr service subscribes to methods on the services that it's attached to. It then parses that data and adds it to the search engine. The net effect is that we can use Solr to build a Lucene index of the messages that flow within MRL. One possibility is a record-and-playback sort of thing. Another is letting the robot know what it's seen lately, so it can answer questions like "Where are my keys?!" Being able to remember conversations, and how the robot reacted in those conversations, will be very useful as a general sort of data service in MRL.

I'd like us, as a group, to talk about and define a common set of memory objects that we will persist and subsequently be able to search for the robot. I think some initial ones that make a lot of sense are the following:

  • record video from opencv  (at least some of the frames)
  • record what deeplearning4j models have classified from the input video frames (or other data)
  • record what was recognized by speech recognition (webkit/sphinx)
  • record what the robot responded via ProgramAB
  • record the timestamps and the angles that any servo was told to move to.

I haven't yet implemented motor or servo movement requests. We will need to make sure that the various service-based interfaces also implement the getName() method; otherwise, it will be really difficult to make sense of the data in the index.

Ultimately, we just need to support objects that represent the data we want to capture and recall. But I think this will be very handy: potentially we could use a joystick to tell the robot to move to a pose, and then ask it to play back that sequence of movements...
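The record-and-playback idea could be as simple as storing (timestamp, servo, angle) entries and replaying them in time order; here is a rough sketch (the classes are invented for illustration, not the actual Solr-backed services):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of recording servo moves and playing them back in time order.
// MovementRecorder, Move, and ServoSink are hypothetical names.
class MovementRecorder {
    static class Move {
        final long timeMillis;
        final String servoName;
        final double angle;
        Move(long timeMillis, String servoName, double angle) {
            this.timeMillis = timeMillis;
            this.servoName = servoName;
            this.angle = angle;
        }
    }

    // Callback that receives each replayed move (a real service would
    // forward this to the servo controller).
    interface ServoSink {
        void moveTo(String servoName, double angle);
    }

    private final List<Move> recorded = new ArrayList<>();

    void record(long timeMillis, String servoName, double angle) {
        recorded.add(new Move(timeMillis, servoName, angle));
    }

    // Replay all recorded moves sorted by timestamp.
    void playback(ServoSink sink) {
        recorded.stream()
                .sorted(Comparator.comparingLong((Move m) -> m.timeMillis))
                .forEach(m -> sink.moveTo(m.servoName, m.angle));
    }
}
```

With entries stored in the index instead of a list, a pose sequence captured via joystick could be searched for and replayed the same way.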

jeffrschneider

6 years ago

Kevin - "memory" is *very* complicated. I appreciate the Solr/Lucene approach to simplify, but that will push the details into the metadata, which is fine - it just means you will need to put more thought into the format. I hate to go old school on you... but I'll suggest you write a mini-specification for this. I'm happy to collaborate with you on it. There are some best practices available. Some are overly complex, but the good stuff can be distilled (event markup languages).