I have my Raspberry Pi and Picam working now, thanks for all the help on that. I'm now trying to get some simple vision stuff working. I've managed to do face tracking and get the bounding box variables returned to the code using this example:

http://myrobotlab.org/service/OpenCV

However it's a bit heavy for the Raspberry Pi, and essentially I just need optical flow, so the LKOpticalTrack filter looks ideal. I can make it work in the GUI - it tracks points fine - and I can specify the point to track in the code as per the example, which also works fine.

However LKOpticalTrack doesn't return a bounding box, so the code in the example won't return the tracked point variables for use in my code. Does anyone know what the equivalent is for point variables from OpenCV?

A replacement for: print("bounding box", box.x, box.y, box.width, box.height) - but for points instead of a box?

I tried this example: https://github.com/MyRobotLab/pyrobotlab/blob/master/toSort/OpenCV.LKOp…

...but it just gives me a bunch of 'PyException null' errors as follows. The example is 4 years old - has anyone had it working since?

I also tried it on a Windows box with the same results. The error output is:


[python.input] [ERROR] python error PyException null
[python.input] [ERROR] ------
Traceback (most recent call last):
  File "<script>", line 1, in <module>
TypeError: input() takes no arguments (1 given)
at org.python.core.Py.TypeError(Py.java:265)
at org.python.core.PyBaseCode.call(PyBaseCode.java:301)
at org.python.core.PyBaseCode.call(PyBaseCode.java:132)
at org.python.core.PyFunction.__call__(PyFunction.java:413)
at org.python.pycode._pyx5.f$0(<script>:1)
at org.python.pycode._pyx5.call_function(<script>)
at org.python.core.PyTableCode.call(PyTableCode.java:171)
at org.python.core.PyCode.call(PyCode.java:18)
at org.python.core.Py.runCode(Py.java:1614)
at org.python.core.Py.exec(Py.java:1658)
at org.python.util.PythonInterpreter.exec(PythonInterpreter.java:276)
at org.myrobotlab.service.Python$InputQueueThread.run(Python.java:119)
 
Thanks!

MaVo

6 years 2 months ago

So, I looked it up in the source code - you'll want to use data.getPoints().

This should be documented better ...

Here is a small, working script:

from org.myrobotlab.image import Util

# start an OpenCV service
opencv = Runtime.start("opencv","OpenCV")

# add python as a listener to OpenCV data
# this tells the framework - whenever opencv.publishOpenCVData is invoked
# python.onOpenCVData will get called
python = Runtime.start("python","Python")
python.subscribe("opencv", "publishOpenCVData")

# callback - all data from opencv will come back to
# this method
def onOpenCVData(data):
  # all found points
  #print data.getPoints()
  # find out how many points there are (e.g. to loop over them)
  #print data.getPoints().size()
  # get one (the first) point & access its coordinates
  # (note: this line throws a PyException until at least one
  #  tracking point has been set, e.g. by clicking in the display)
  point = data.getPoints().get(0)
  print point.get("x"), point.get("y")

# start capture (e.g. webcam will start filming)
opencv.capture()

# add the LKOpticalTrack filter
# it will attempt to track points in the moving (video) stream
opencv.addFilter("LKOpticalTrack")
opencv.setDisplayFilter("LKOpticalTrack")

To make processing lighter, you could try downscaling your video stream (e.g. using the PyramidDown filter).
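
For example, something like this (an untested sketch - each PyramidDown pass halves the frame size, so add it before LKOpticalTrack):

# downscale frames before tracking to reduce CPU load
opencv.addFilter("PyramidDown")
opencv.addFilter("LKOpticalTrack")
opencv.setDisplayFilter("LKOpticalTrack")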

Hi James, I read a thesis by someone who wanted to make an optical flow sensor based on the Raspberry Pi 3. It's not completely what you are looking for, BUT he did mention that he first did some tests with a raspicam and OpenCV.

He adjusted the flow parameters passed to OpenCV and changed settings in OpenCV to use grayscale and/or RAW. I had to look it up on my computer, since I used it for my PX4 flow sensor and for my drones to do optical flow tracking of the ground.

https://dspace.cvut.cz/bitstream/handle/10467/67320/F3-DP-2017-Heinrich… is the link to the thesis.


gr. Wilco

Thanks for the code - that seems to work OK.

The only thing is that I can't seem to set the sample point from the code anymore, so it just returns a lot of PyException null errors until I go and do it manually - then it returns the X/Y values, which is great.

Do you have any idea how I can click 'get features' from the code? That seems to be the best all-round way to track whatever is in the foreground.

Thanks again

"get features" can be "clicked" by setting the variable filter.needTrackingPoints = True

I don't know if this is the "official" way to do it, but this at least seems to be a nice work-around in the meantime.

This is a complete, working script demonstrating this:

from org.myrobotlab.image import Util

# start an OpenCV service
opencv = Runtime.start("opencv","OpenCV")

# add python as a listener to OpenCV data
# this tells the framework - whenever opencv.publishOpenCVData is invoked
# python.onOpenCVData will get called
python = Runtime.start("python","Python")
python.subscribe("opencv", "publishOpenCVData")

# callback - all data from opencv will come back to
# this method
def onOpenCVData(data):
  # all found points
  print data.getPoints()
  # find out how many points there are (e.g. to loop over them)
  #print data.getPoints().size()
  # get one (the first) point & access its coordinates
  #point = data.getPoints().get(0)
  #print point.get("x"), point.get("y")

# start capture (e.g. webcam will start filming)
opencv.capture()

# add the LKOpticalTrack filter
# it will attempt to track points in the moving (video) stream
filter = opencv.addFilter("LKOpticalTrack")
opencv.setDisplayFilter("LKOpticalTrack")

# emulate "clicking" "get features" in the gui
filter.needTrackingPoints = True

I found it because I know the structure of the project and therefore know where to start looking for it in the source code.

It should be documented, but - well, MRL's documentation has its good sections & its bad sections. This one seems to be of the second type.

So I took the time and collected the links to the specific code lines on GitHub.

I really don't think this is much help; it would be much better to document which functions & variables are accessible (and meant to be accessed!) on each object. We kind of have this already - JavaDoc, ya know? But to make this work, the actual JavaDoc annotations would need to be written.

Anyway, here is getPoints() on OpenCVData, which returns an ArrayList of Point2Dfs. You can use the standard Java List methods, e.g. get(int index) to get an element from the list, or size() to get the number of points. Once you've got a single point, you can get its coordinates using get(string).
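
Put together, looping over all tracked points looks roughly like this (a sketch reusing the callback from the scripts above):

def onOpenCVData(data):
  points = data.getPoints()
  if points is None:
    return
  # walk the list with the standard Java List methods
  for i in range(points.size()):
    p = points.get(i)
    print i, p.get("x"), p.get("y")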

What "get features" does can be found by looking at the event handler section of the OpenCVFilterLKOpticalTrack Swing GUI. Now we just need a reference to the filter - luckily, OpenCV.addFilter(...) returns just that!

Thanks again for this. I'm now having trouble with:

opencv.invokeFilterMethod("LKOpticalTrack","samplePoint", 0.5, 0.5)

which doesn't seem to work at all now. I went through the source code and found what I think is the mouse-click handling at the bottom of: https://github.com/MyRobotLab/myrobotlab/blob/develop/src/main/java/org…

Ultimately I want to be able to hold an item in front of the camera, trigger it to set a point in the middle of the image from an input (probably webkit speech recognition), and then track it, outputting the coordinates to pan the camera.

'Get features' kind of works, but it returns an arbitrary number of points, and some of them are stationary in the background. I guess I could take an average of them all or something, but a single point would work best if I could set it in the code.
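
For the averaging idea I was thinking of something like this (untested sketch, reusing the callback from MaVo's script):

def onOpenCVData(data):
  points = data.getPoints()
  if points is None or points.size() == 0:
    return
  # average all tracked points into a single centroid
  sumX, sumY = 0.0, 0.0
  for i in range(points.size()):
    p = points.get(i)
    sumX += p.get("x")
    sumY += p.get("y")
  n = points.size()
  print "centroid", sumX / n, sumY / n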


thanks!


Hi James,  

  Here's an updated example that I think (hope) will do exactly what you're looking for.

https://github.com/MyRobotLab/pyrobotlab/blob/develop/home/kwatters/Ope…

Good luck!

  -Kevin

P.S. I should also mention that I only tested this on the develop branch. The syntax shouldn't have changed in a while, but the opencv.height / opencv.width properties were only exposed/added a few weeks ago, so make sure you're using a recent build - things should be smoother that way.
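
The gist of it, very roughly (just a sketch of the idea, not the linked script - it assumes samplePoint takes pixel coordinates here, which is why the width/height properties matter):

# sample a single tracking point at the centre of the frame
# once capture is running
x = opencv.width / 2
y = opencv.height / 2
opencv.invokeFilterMethod("LKOpticalTrack", "samplePoint", x, y)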