https://www.youtube.com/watch?v=Dqa-3N8VZbw
Hi guys. I would like to implement the function shown in the attached URL, which is an emotion detection function. I wish to implement it when the face detection Python script is executed in MyRobotLab, so that the emotion status is also displayed when a face is detected. Does anyone have an idea how I should do this?
The tracking script used is Tracking.py.
By the way, when the tracking function is activated via voice command, which file and script does InMoov refer to?
Hello Kelvin,
I didn't see the end of the video you referenced. However, I know he uses OpenCV and TensorFlow - this is probably a similar work process - http://www.paulvangent.com/2016/04/01/emotion-recognition-with-python-opencv-and-a-face-dataset/
So, in general you need OpenCV + a training set + TensorFlow.
You need to train a model.
You need to run the model.
You need face segmentation out of a live video stream to submit to the model, and the model to report the appropriate label (Sad, Happy, Calm, Angry, etc.).
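At run time that pipeline looks roughly like the sketch below - plain Python outside MRL, and the model file name, the 48x48 input size, and the label order are assumptions that would come from your own training, not from MRL:

import cv2
import numpy as np
import tensorflow as tf

# Assumed: a small CNN trained beforehand on labeled 48x48 grayscale
# face crops and saved as "emotion_model.h5". Labels must match training order.
LABELS = ["angry", "calm", "happy", "sad"]
model = tf.keras.models.load_model("emotion_model.h5")

# Face segmentation with a stock Haar cascade that ships with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Crop the face, scale to the model input size, normalize to [0, 1].
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(crop.reshape(1, 48, 48, 1), verbose=0)[0]
        print(LABELS[int(np.argmax(probs))])  # e.g. "happy"
cap.release()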
Sounds easy, no? Many people have worked on all the building blocks of this.
For MRL, you'd probably be using DL4J (Deeplearning4j) instead of TensorFlow - but the general process holds true.
As a "developer" I'd want to create another MRL OpenCV Filter ... like Emotion filter which did all these things and published an emotion based on previous training.
That is what needs to be done in order for me, "as an MRL user", to do the following in Python:
python.subscribe("cv","publishOpenCvData")

def onOpenCVData(data):
    emotion = data.get("emotion")
    if (emotion == "happy"){
        mouth.speak("Hi, I'm glad your happy")
    } elseif (emotion == "sad"){
        mouth.speak("why are you sad?")
    } ...
Still need some advice..
These are my questions:
1. So it seems like MRL does not support TensorFlow services, right?
2. Is there any tutorial on using Deeplearning4j in MRL to train for emotion? Or should it be executed outside MRL and the trained data then imported into MyRobotLab? How do we import it into an MRL service?
3. Does it mean that I can only execute one service at a time (detect + track face, or detect emotion)? How do we create an Emotion filter?
4. Actually I'm a bit confused by your distinction between "As a Developer" and "As an MRL user", haha.
5. The code above is actually the function I wish to implement on InMoov via MRL. It's my university project, but I'm stuck on how to actually implement it. What I wish to do is have InMoov detect an emotion and react to it with a certain gesture and speech. For example, if the user shows a mad face, InMoov will take a sad gesture along with some speech - like two-way communication between human and robot.
I'm looking forward to your reply, GroG.
1. No.. it supports DL4J, which in turn can be used to "import" models trained by other frameworks. DL4J has its own training capability too. You can learn more about it here - https://deeplearning4j.org/
2. http://myrobotlab.org/service/Deeplearning4j and of course there is Google:
https://www.google.com/search?q=dl4j+myrobotlab
3. Services are different than filters. In MRL, OpenCV is a "service"; it has many "filters". Filters are developed code which operates against a video stream frame by frame. There is currently a DL4J filter. It operates on a model someone trained a while ago (not sure if kwatters trained it or imported it from somewhere else). Kwatters has also trained a model during "runtime" at one point, demonstrating the capability of a Recognition filter. http://myrobotlab.org/content/face-recognition-opencv-mrl No one at this point has created an Emotion Recognizer filter.
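To make the service-vs-filter distinction concrete, this is roughly what adding filters to the OpenCV service looks like in a Python script (a sketch - exact filter names vary between MRL versions, and the Emotion filter is precisely the part that doesn't exist yet):

opencv = Runtime.createAndStart("cv", "OpenCV")
opencv.addFilter("FaceDetect")  # existing filter: finds faces frame by frame
opencv.addFilter("DL4J")        # existing filter: labels frames with a pre-trained model
# opencv.addFilter("Emotion")   # hypothetical: the filter that still needs to be written
opencv.capture()                # start processing the video stream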
4. My point was, you don't need to be a developer to use MRL. We work at designing it to be easy enough that people interested in robots can create systems without writing code. The Emotion filter for OpenCV has not been created yet, so some developer will need to create it.
5. Sounds cool. I think if you look at what kwatters has done, it's pretty much the same thing, except the model would be trained on recognizing "emotion" vs recognizing a specific face. The pieces are there, it just needs assembly and testing and time ;)
There's also an Emoticon service and a FiniteStateMachine service which perhaps "might" be of use to you.
http://myrobotlab.org/content/making-zimbo-emotion-emoji-little-self-reflection
http://myrobotlab.org/content/emoji-service-preview
Good Luck !
I wonder...
This is my short-term idea for emotion detection.
1. I implement a second video cam on InMoov and run emotion detection outside MRL (using OpenCV + TensorFlow + etc.).
2. The detected emotion is written into a file; if the detected emotion is "Happy", "Happy" will be written.
3. I create a voice command for InMoov, maybe "Capture emotion".
4. MRL will read the file content from the file location to get the result.
5. Then it goes through an IF-ELSE statement to display a certain gesture and speech (see the sketch below).
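For steps 3 to 5, I imagine the MRL-side Python would be something like this (just a sketch - the file path, the head pose, and the ear wiring are placeholders for my setup):

# Assumed: the external detector keeps writing the latest emotion to this file.
EMOTION_FILE = "C:/mrl/emotion.txt"

def onCaptureEmotion():
    f = open(EMOTION_FILE, "r")
    emotion = f.read().strip()
    f.close()
    if emotion == "Happy":
        mouth.speak("I am glad you are happy")
    elif emotion == "Mad":
        i01.moveHead(60, 90)  # placeholder: whatever sad pose InMoov has defined
        mouth.speak("Please don't be mad at me")

# Assumed wiring: "ear" is the speech recognition service from the InMoov scripts.
ear.addCommand("capture emotion", "python", "onCaptureEmotion")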
Do you think this is workable?
Hello Kelvin,
You wouldn't need to process the first part outside MRL, because it could all be done within MRL with OpenCV + DL4J.
That being said, don't ask me how, because I have never been able to use DL4J yet. There is no tutorial or example script to learn how it works. You would need to search your way through Java land; here is the link for the Manticore version:
https://github.com/MyRobotLab/myrobotlab/blob/master/src/org/myrobotlab/service/Deeplearning4j.java
Hello GroG, you mixed a bit of Java and Python in your example.
I think this would be better adapted for Python:
python.subscribe("cv","publishOpenCvData")

def onOpenCVData(data):
    emotion = data.get("emotion")
    if (emotion == "happy"):
        mouth.speak("Hi, I'm glad you're happy")
    else:
        if (emotion == "sad"):
            mouth.speak("why are you sad?")
Heh thanks,
Ya you can hear my Java accent ;)
This link could be of some help:
https://github.com/isseu/emotion-recognition-neural-networks