So this might not be related much to MyRobotLab, but I'm interested in how a face is tracked after detection using OpenCV. Does anyone have fundamental knowledge of this service? I mean, how are the servos controlled to make sure the detected face always stays at the center, etc.?
Tracking...
There are a few different technologies involved with face tracking in MyRobotLab (or any other system).
1. camera
2. some servos, likely one for yaw and one for pitch (the x and y axes)
3. a control loop
Ok, let's talk a bit more about how those work together. The camera is pretty straightforward: a normal webcam takes pictures, or frames, and those frames are passed through a software package called OpenCV. (Other software can do this; we use OpenCV.) OpenCV passes the frame through a face detection algorithm. There are many face detection algorithms; one that we commonly use is a Haar cascade classifier. There are also deep neural network implementations, but those tend to require more CPU/GPU resources. The result is that if a face is detected, the algorithm provides us with a bounding box for that face. The center of the bounding box is generally the center of the face that was detected.
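To make that concrete, here's a minimal sketch of that detection step in Python. This isn't MyRobotLab's actual OpenCV service code; it assumes the opencv-python package, which bundles the Haar cascade XML files under cv2.data.haarcascades:

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade
# (adjust the path if your install keeps the XML files elsewhere)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # default webcam
ret, frame = cap.read()            # grab one frame
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Center of the bounding box, normalized so (0.5, 0.5) is mid-frame
        cx = (x + w / 2.0) / frame.shape[1]
        cy = (y + h / 2.0) / frame.shape[0]
        print("face center:", cx, cy)
cap.release()
```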
Next, there are two servos at the base of the camera. One controls the left/right movement of the camera (also called yaw) and the other controls the up/down movement (also called pitch).
Lastly, there is a control loop. The control loop is responsible for looking at the output of the camera and then computing a change in position for the servos. The control loop that is typically (but not always) used is called a PID control loop. These loops have a "set point": the value that the control loop will attempt to drive the system toward. In the tracking case, we set the "set point" to the center of the image, (0.5, 0.5).
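As a rough illustration of the idea (this is a textbook PID controller, not MyRobotLab's actual Pid service API, and the gains below are made up rather than tuned):

```python
class PID:
    """Minimal textbook PID controller; gains are illustrative, not tuned."""

    def __init__(self, kp, ki=0.0, kd=0.0, setpoint=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint        # the value the loop drives toward
        self.integral = 0.0
        self.prev_error = 0.0

    def compute(self, measurement, dt=1.0):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Output is a correction to apply to the servo position
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. a face detected at x = 0.2 yields a positive correction toward 0.5
pid_x = PID(kp=20.0, setpoint=0.5)
print(pid_x.compute(0.2))   # 6.0 degrees of correction with kp = 20
```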
When the camera takes a picture and OpenCV detects a face, the position of the detected face is known. Perhaps it comes in a little to the left, with a value of (0.2, 0.5). This value is fed into the control loop algorithm, which sees that the x value (0.2) is less than the desired x value of 0.5. So it sends a signal to the servos to move a little to the left, so as to move the detected face into the center of the image. The amount that the servo moves is proportional to the difference between the current position and the desired position. This loop runs over and over very quickly, and the net result is that the camera is moved by the servos to track someone's face.
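Putting the pieces together, the whole tracking loop might look roughly like the sketch below. Note that detect_face_center(), pan_servo, tilt_servo, and set_angle() are hypothetical placeholders, not a real API, and depending on how your camera is mounted you may need to flip the sign of the correction:

```python
# Pure proportional control, matching the "move proportional to the error"
# idea above; a full PID loop adds the integral and derivative terms.
KP = 20.0                            # degrees of servo travel per unit of error
pan_angle, tilt_angle = 90.0, 90.0   # start with the camera centered

while True:
    center = detect_face_center()    # hypothetical: normalized (x, y), or None
    if center is None:
        continue                     # no face this frame; hold position
    x, y = center
    # Error = set point (0.5) minus measured position
    pan_angle += KP * (0.5 - x)
    tilt_angle += KP * (0.5 - y)
    pan_angle = max(0.0, min(180.0, pan_angle))     # clamp to servo limits
    tilt_angle = max(0.0, min(180.0, tilt_angle))
    pan_servo.set_angle(pan_angle)   # hypothetical servo interface
    tilt_servo.set_angle(tilt_angle)
```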
simple enough
good luck!
Belated thanks!
Got it! Thanks for the reply though, I really appreciate it.