References
A little closer: here is a PhotoReel in MRL - you can see the cropped images in Memory, at the location /present/faces/unknown/face1. Now that the data is in memory, it can be associated with a word, e.g. GroG, and moved to /present/faces/known/GroG.
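For illustration only, here is a minimal Python sketch of that association step. The flat dictionary and the associate() helper are stand-ins I made up, not MRL's actual Memory API.

```python
# Hypothetical sketch - not MRL's real Memory service. Cropped faces sit
# under an "unknown" path until a word/label is associated with them.
memory = {
    "/present/faces/unknown/face1": ["crop_0.png", "crop_1.png"],  # placeholder crops
}

def associate(memory, unknown_path, label):
    """Move an unknown face node to /present/faces/known/<label>."""
    crops = memory.pop(unknown_path)
    memory["/present/faces/known/" + label] = crops

associate(memory, "/present/faces/unknown/face1", "GroG")
# memory is now {"/present/faces/known/GroG": ["crop_0.png", "crop_1.png"]}
```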
New faces will be compared with templates of known faces so that they can be identified. One question I have: this is a set of 30, but many hundreds of frames are captured - which ones should it save? The ones with the greatest diversity? Or is that asking for matching errors?
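One possible answer, sketched below in Python/OpenCV: greedily keep the crops whose grey-level histograms are least similar to the ones already kept, so the fixed set of 30 covers more variation in pose and lighting. The pick_diverse() helper and the histogram correlation measure are assumptions of mine, not anything MRL currently does.

```python
import cv2

def pick_diverse(crops, keep=30):
    """Greedily keep the crops whose histograms are least similar to the
    ones already kept, so the saved set covers more variation."""
    if not crops:
        return []
    hists = []
    for img in crops:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        h = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hists.append(cv2.normalize(h, h))
    chosen = [0]                                    # seed with the first capture
    while len(chosen) < min(keep, len(crops)):
        best_i, best_d = None, -2.0
        for i in range(len(crops)):
            if i in chosen:
                continue
            # distance to the closest already-chosen crop (1 - correlation)
            d = min(1.0 - cv2.compareHist(hists[i], hists[c], cv2.HISTCMP_CORREL)
                    for c in chosen)
            if d > best_d:
                best_i, best_d = i, d
        chosen.append(best_i)
    return [crops[i] for i in chosen]
```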
Here are the scary results of the new improved OpenCVData - it's a series of cropped faces from a FaceDetect run. A step closer to allowing detection or identification (TLD+). The general idea would be to load these images into an array, run a template matching or bag-of-words pipeline, and look for matches in future frames. It would be nice to limit the array and to order it in some way...
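A rough sketch of that pipeline using plain cv2.matchTemplate, to make the idea concrete. The file names are placeholders, the 0.7 threshold is a guess, and the templates must be smaller than the frame - treat it as an illustration, not the OpenCVData implementation.

```python
import cv2

# Placeholder file names: the cascade's cropped faces become the template array.
templates = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in
             ["face_crop_0.png", "face_crop_1.png", "face_crop_2.png"]]

def best_match(frame_gray, templates, threshold=0.7):
    """Return (template index, score, top-left corner) of the best match, or None."""
    best = None
    for i, tmpl in enumerate(templates):
        res = cv2.matchTemplate(frame_gray, tmpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val >= threshold and (best is None or max_val > best[1]):
            best = (i, max_val, max_loc)
    return best
```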
These are the results of FaceDetect. Another strategy is to collect data from around an LKOptical point - this is what I will add to the LKOptical filter next.
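Sketched below with cv2.calcOpticalFlowPyrLK: track a single point with pyramidal Lucas-Kanade and grab a small patch of pixels around it each frame. The webcam index, the starting point, and the 64-pixel patch size are all arbitrary assumptions for the sketch.

```python
import cv2
import numpy as np

def crop_around_point(gray, pt, size=64):
    """Grab a small patch of pixels centred on the tracked point."""
    x, y = int(pt[0]), int(pt[1])
    half = size // 2
    return gray[max(0, y - half):y + half, max(0, x - half):x + half]

cap = cv2.VideoCapture(0)                         # assumes a webcam at index 0
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
point = np.array([[[160.0, 120.0]]], np.float32)  # seed point, e.g. a detected eye

for _ in range(300):                              # track for a few hundred frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    next_pt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, point, None)
    if status[0][0] == 1:
        patch = crop_around_point(gray, next_pt[0][0])  # the data "around the LK point"
        point = next_pt
    prev_gray = gray

cap.release()
```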
Looking at some of the other detection cascades:
The left eye happened to be a great place to set the LKPoint - since an eye is usually a very good tracking point. The cascade selected the left eye, but got it wrong often.
The pair-of-eyes cascade was much more consistent - it rarely got it wrong - but it was more fragile: if one of the eyes is hidden in a profile view, the detector doesn't get any data at all.
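For reference, here is roughly how those two cascades could be used from Python/OpenCV to seed the LKPoint. Cascade file names vary between OpenCV releases - the left-eye cascade ships with opencv-python, while the mcs eye-pair cascade only came with older data directories, so that path is an assumption.

```python
import cv2

# haarcascade_lefteye_2splits.xml ships with opencv-python; the mcs eye-pair
# cascade path below is an assumption (it was only bundled with older releases).
left_eye = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_lefteye_2splits.xml")
eye_pair = cv2.CascadeClassifier("haarcascade_mcs_eyepair_big.xml")

def lk_seed_point(gray, cascade):
    """Centre of the first detection, to be used as the LKPoint - or None,
    e.g. when a profile view hides one eye and the pair cascade finds nothing."""
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(hits) == 0:
        return None
    x, y, w, h = hits[0]
    return (x + w // 2, y + h // 2)
```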