Sparse Feature Optical Flow

I've added an "OpticalFlow" filter to OpenCV (worke branch), and here it's seeing through work-e's Kinect eye - you can see work-e turning left ... or was it right ? :)

So to make this useful for collision avoidance, some work will need to be put into it - like deriving the direction vector, and some semi-intelligent pruning of the data.

It starts by getting good features to track, which itself has a few parameters to tweak.

When it has 2 frames (maybe I'll add an option for more?) it finds the displacement of each point between the two frames and draws a line between the dots.

The point of this is to use it as a "visual encoder" to get information regarding heading and movement, and potentially collision avoidance.

Lots of work to make it useful ...

kwatters:

Moving dots..

Very cool progress!  So, this is, as you put it, a sort of visual/optical encoder.  I'm curious what the jump looks like to go from this to obstacle avoidance and motion planning.

I'm also curious if you could detect regions where objects are moving and regions where objects are standing still.  That seems like it might be interesting for guessing whether something is going to jump out in front of you.


moz4r:

this is magic ?

If I understood correctly, you managed to superimpose OpenCV onto the Kinect's point clouds?

GroG:

No .. but I do have a goal to "mesh" the kinect depth data.

This is a different technique, which uses "only" optical information and no depth from the kinect.
But if you can track points in the optical images - and move at the same time, potentially you can calculate depth.  If you can calculate the depth of the points, you could create a 3D mesh.

One of the challenges with optical is that it's passive (unlike the Kinect's IR projector & depth camera) .. so there is no pre-calibrated projection of points everywhere.

Instead we use another function called GetGoodFeaturesToTrack.  The features it finds consist of "corners" of stuff.

Then we "move the camera".
When that happens the dots will move slightly,
and if we know how much the camera moved, we can calculate the distance to all the points without a Kinect !

That would be useful outside, as the Kinect does not work very well outdoors.