I have my Raspberry Pi and Picam working now, thanks for all the help on that. I'm now trying to get some simple vision stuff working. I've managed to do face tracking and get the bounding box variables returned to the code using this example:
http://myrobotlab.org/service/OpenCV
However it's a bit heavy for the Raspberry Pi, and essentially I just need optical flow, so the LKOpticalTrack filter looks ideal. I can make it work in the GUI: it tracks points fine, and I can also specify the point to track in code as per the example, which works fine.
However LKOpticalTrack doesn't return a bounding box, so the code in the example won't return the tracked point variables for use in my code. Does anyone know what the equivalent is for point variables from OpenCV?
A replacement for:
print("bounding box", box.x, box.y, box.width, box.height)
...but for points instead of a box?
I tried this example: https://github.com/MyRobotLab/pyrobotlab/blob/master/toSort/OpenCV.LKOp…
...but it just gives me a bunch of 'PyException null' errors, as follows. The example is 4 years old; has anyone had this working since?
I also tried it on a Windows box with the same results. Error output is:
So, I looked it up in the source code; you'll want to use data.getPoints().
This should be documented better ...
Here is a small, working script:
To make processing lighter, you could try downscaling your video stream (e.g. using PyramidDown Filter).
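The script attachment isn't shown above, so here is a hedged sketch of what such a script might look like. This only runs inside MyRobotLab's built-in Python (Jython) service, not standalone; the service names, the subscribe pattern, and the Point2Df x/y fields are assumptions based on common pyrobotlab examples, not verbatim from this thread:

```python
# Sketch only: runs inside MyRobotLab's Python service, not standalone.
# Service and filter names are illustrative.
opencv = Runtime.createAndStart("opencv", "OpenCV")
opencv.addFilter("LKOpticalTrack")

# called whenever the OpenCV service publishes a frame's data
def onOpenCVData(data):
    points = data.getPoints()        # may be None until a point is tracked
    if points is not None and points.size() > 0:
        p = points.get(0)
        print("tracked point", p.x, p.y)

# route publishOpenCVData events to the callback above
python.subscribe("opencv", "publishOpenCVData")
opencv.capture()
```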
Hi James, I read a thesis by someone who wanted to make an optical flow sensor based on the Raspberry Pi 3. It's not exactly what you are looking for, BUT he did mention that he first ran some tests with a Raspicam and OpenCV.
He tuned the optical flow parameters in OpenCV, switching to grayscale and/or RAW. I had to look it up on my computer, since I used it for my PX4 flow sensor and my drones for optical flow tracking of the ground.
https://dspace.cvut.cz/bitstream/handle/10467/67320/F3-DP-2017-Heinrich… is the link to the thesis.
gr. Wilco
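For anyone wondering what LK optical flow actually computes per point, here is a tiny, self-contained single-point Lucas-Kanade step in plain NumPy (no MRL, no OpenCV). It's illustrative only; MRL's filter wraps OpenCV's pyramidal LK, which handles much larger motions:

```python
import numpy as np

def lk_track_point(prev, curr, p, win=7):
    """One Lucas-Kanade step: estimate where point p = (x, y) in frame
    `prev` moved to in frame `curr`, using a (2*win+1)^2 window."""
    x, y = int(round(p[0])), int(round(p[1]))
    Iy, Ix = np.gradient(prev)              # spatial gradients (y-axis first)
    It = curr - prev                        # temporal gradient
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    # least-squares solution of the brightness-constancy equations
    d, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    return p[0] + d[0], p[1] + d[1]

# demo: a Gaussian blob shifted 1 px to the right between two frames
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 18.0)
nx, ny = lk_track_point(blob(20, 20), blob(21, 20), (20.0, 20.0))
print(nx, ny)   # roughly (21, 20)
```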
Thanks for the code, that seems to work ok.
The only thing is that I can't seem to set the sample point from the code anymore, so it just returns a lot of PyException nulls until I go and do it manually; then it returns the X/Y values, which is great.
Do you have any idea how I can click on 'get features' from the code? That seems to be the best all-round way to track whatever is in the foreground.
Thanks again
"get features" can be "clicked" by setting the variable filter.needTrackingPoints = True
I don't know if this is the "official" way to do it, but this at least seems to be a nice work-around in the meantime.
This is a complete, working script demonstrating this:
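The script itself isn't included above, so here is a hedged sketch of the needTrackingPoints workaround. It requires a running MyRobotLab instance, and it assumes addFilter returns the filter object (as discussed later in this thread):

```python
# Sketch only: requires a running MyRobotLab instance.
opencv = Runtime.createAndStart("opencv", "OpenCV")
lk = opencv.addFilter("LKOpticalTrack")   # keep a reference to the filter
opencv.capture()
sleep(2)                                  # let a few frames come in first
# equivalent of clicking "get features" in the GUI:
lk.needTrackingPoints = True
```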
Thanks again,
Is this info only available by looking at the source code or is there somewhere all of this is documented?
I found it because I know the structure of the project and therefore know where to start looking in the source code.
It should be documented, but, well, MRL's documentation has its good sections & its bad sections. This one seems to be of the second type.
MaVo, maybe post here where you found it, etc., so it is documented and can be added to the service, so everything is updated.
So I took the time and collected the links to the specific code lines @GitHub.
I really don't think this is much help; it would be much better to document which functions & variables are accessible (and meant to be accessed!) on each object. We kind of have this already, JavaDoc, ya know? But to make this work, the actual JavaDoc annotations would need to be written.
Anyway, here is getPoints() on OpenCVData, which returns an ArrayList of Point2Dfs. You can use standard Java functions on it, e.g. get(int index) to get an element from the list or size() to get the number of points. When you've got a single point, you can get its coordinates using get(string).
How "get features" works can be found by looking at the event-handler section of the OpenCVFilterLKOpticalTrack Swing GUI. Now we just need to get a reference to the filter; luckily, OpenCV.addFilter(...) returns just that!
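Putting those pieces together, a hedged sketch for MRL's Python service. The method names (addFilter, samplePoint, getPoints) come from this thread; the samplePoint signature and the Point2Df x/y fields are assumptions:

```python
# Sketch only: runs inside MRL's Python service, not standalone.
lk = opencv.addFilter("LKOpticalTrack")   # addFilter returns the filter

# seed a single tracking point at the image centre (normalised coords),
# like clicking in the video display:
lk.samplePoint(0.5, 0.5)

def onOpenCVData(data):
    points = data.getPoints()             # ArrayList of Point2Df, may be None
    if points is not None:
        for i in range(points.size()):    # standard java.util.List access
            p = points.get(i)
            print(i, p.x, p.y)

python.subscribe("opencv", "publishOpenCVData")
```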
Thanks again for this. I'm now having trouble with:
opencv.invokeFilterMethod("LKOpticalTrack","samplePoint", 0.5, 0.5)
Which doesn't seem to work at all now. I went through the source code and found what I think is the mouse click code at the bottom of: https://github.com/MyRobotLab/myrobotlab/blob/develop/src/main/java/org…
Ultimately I want to be able to hold an item in front of the camera, trigger it to set a point in the middle of the image from an input (probably WebKit speech recognition), and then track it, outputting the coordinates to pan the camera.
'Get features' kind of works, but it returns an arbitrary number of points, and some of them are stationary in the background. I guess I could take an average of them all or something, but a single point would work best if I could set it in the code.
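If averaging is the stopgap, the maths is trivial; here is a plain-Python centroid over a list of (x, y) tuples (the same shape of data getPoints() gives you once the x/y fields are pulled out). Note that the mean will be dragged toward any stationary background points, which is exactly the problem described above:

```python
def centroid(points):
    """Return the average (x, y) of a non-empty list of (x, y) points."""
    n = float(len(points))
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    return (sx / n, sy / n)

print(centroid([(1.0, 2.0), (3.0, 4.0)]))  # -> (2.0, 3.0)
```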
thanks!
Updated LK tracking example script
Hi James,
Here's an updated example that I think (hope) will do exactly what you're looking for.
https://github.com/MyRobotLab/pyrobotlab/blob/develop/home/kwatters/Ope…
Good luck!
-Kevin
P.S. I should also mention that I only tested this on the develop branch. The syntax shouldn't have changed in a while, but the opencv.height / opencv.width properties were only exposed/added a few weeks ago, so make sure you're using a recent build; things should be smoother that way.
Thanks Kevin, I'll give that a go shortly.