How can InMoov utilize "Deep Learning"

I would like my InMoov to never have to be touched by a human. I am trying to come up with a way for him to interpret the body language we all put out and respond to it via actuation. So if I walk into the room with my head down and my voice is slower or sad, he would ask, "What's wrong?" He should also be able to build on these experiences and log them for future development. Basically, he would need to write his own code.
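To make the idea concrete, here is a minimal sketch of that loop: combine two affect cues (head pitch from a pose estimator, speech rate from an audio pipeline) into a simple mood guess, pick a spoken response, and log the interaction so future sessions can build on it. All thresholds, function names, and the log format here are hypothetical placeholders; a real InMoov build would feed in values from its own vision and audio services instead of the hard-coded numbers.

```python
# Hypothetical sketch: sense affect cues -> guess mood -> respond -> log.
# Thresholds and inputs are made up for illustration only.
import json
from datetime import datetime, timezone


def guess_mood(head_pitch_deg: float, words_per_min: float) -> str:
    """Head tilted down plus slow speech -> 'sad'; otherwise 'neutral'."""
    head_down = head_pitch_deg < -15.0   # negative pitch = looking down
    slow_speech = words_per_min < 110.0  # typical speech is ~130-150 wpm
    if head_down and slow_speech:
        return "sad"
    return "neutral"


def choose_response(mood: str) -> str:
    """Map a mood label to something the robot would say."""
    responses = {"sad": "What's wrong?", "neutral": "Hello!"}
    return responses.get(mood, "Hello!")


def log_experience(log: list, mood: str, response: str) -> None:
    """Append the interaction so later sessions can learn from it."""
    log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "mood": mood,
        "response": response,
    })


# Simulated observation: head down 25 degrees, speaking at 90 wpm.
experience_log: list = []
mood = guess_mood(head_pitch_deg=-25.0, words_per_min=90.0)
reply = choose_response(mood)
log_experience(experience_log, mood, reply)
print(reply)                       # -> What's wrong?
print(json.dumps(experience_log))
```

The "write his own code" part is the hard one; a nearer-term step is this kind of logging, so the accumulated experiences can later train a model (deep learning) that replaces the hand-written thresholds.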