This is a simple "empty" scene: the camera is pointing at the end of a table. The pixels processed by MOG2 still change enough that flickering dots flash about. This was under a fluorescent light, which might contribute to the noise; however, this can easily be filtered out when grabbing contours by specifying a minimum pixel count for an object of interest.
A strawberry salt shaker is added to the scene. The MOG2 background subtraction does a nice job of isolating the object. The fact that the table is red does not seem to affect it. The pixel difference is so great that the salt shaker is very clear, but so is the salt shaker's reflection ;P
I got the FindContours filter working again (or at least partially). This is what it finds with certain min/max pixel definitions filled in. It has drawn a bounding box around the object and its reflection; as you would expect, it treats the object and its reflection as one object.
Here is another example with the following filters applied.
Erode and dilate clean up much of the noise. In this case I have moved the ball, and the MOG2 filter picks up both the object and the "hole" from which the object was moved. The system can tell pretty easily that the ball was moved, in what direction, and, with some calculations and some assumptions, the distance of movement.
Next it would be nice to extract a template of the object from the image - with a mask. This will help in the future for identification of a "pink ball".
Now that the OpenCV service in MRL has been re-designed, it took only 15 minutes to add a brand new OpenCV filter. With the idea that InMoov will want to distinguish objects, I added OpenCV's BackgroundSubtractorMOG2. The parameters are very simple and it has a "learning" switch. With learning on, the subtractor treats everything it sees as background; switched off, any new pixel deviation is now "foreground".
The visual memory COULD possibly include a lot of information, beginning with the raw image. We should add the FindContours bounding box, but there is so much more info to add:
- position of self & object
- heading of self & object
- distance - based on a variety of strategies (position + heading + perspective)
- inferred size
- light source location
- ending or continuing of wall, ceiling, & floor planes
- and lots of contextual info
- Mateusz Stankiewicz's excellent example