
The OpenCV Service is a library of vision functions.  
Some of the functions are

  • Face detection 
  • Fast Lucas Kanade optical tracking
  • Background/foreground separation
  • Motion detection
  • Color segmentation

The OpenCV Service has several dependencies, so it must be installed before it is used.  You can install it on the Runtime panel: highlight the OpenCV service, then right-click -> install.  A restart is necessary after installing new services.

Raspberry Pi:

If you want to use OpenCV on a Raspberry Pi, there is an extra dependency that needs to be installed manually. Open a terminal window and install it with this command:

sudo apt-get install libunicap2

That will make it possible to use USB cameras, and the Kinect camera.

To use a Raspi camera module, three more steps are necessary:

1. Open 'Raspberry Pi Configuration' => Interfaces => set the Camera checkbox to Enabled.

2. Edit the file /etc/modules and add the line: 

bcm2835-v4l2

3. Reboot.
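The same three steps can be sketched as terminal commands. This assumes a Raspbian image where raspi-config provides the noninteractive (`nonint`) interface; the module name bcm2835-v4l2 applies to the original Pi camera stack and may differ on newer OS releases:

```shell
# Enable the camera interface (0 = enabled in raspi-config's nonint mode)
sudo raspi-config nonint do_camera 0
# Load the V4L2 camera driver at boot by appending it to /etc/modules
echo "bcm2835-v4l2" | sudo tee -a /etc/modules
# Reboot for both changes to take effect
sudo reboot
```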

 

The same can be achieved programmatically with the following line of Python:

runtime.upgrade("org.myrobotlab.service.OpenCV")

Next, you can start a new OpenCV service by right-clicking -> start

Starting a new service can also be done programmatically with the following line of Python:
 
runtime.createAndStart("opencv", "OpenCV")
 
Many of the functions in OpenCV are implemented as pipeline filters in MRL.  That is, filters can be chained: the output of one filter can be connected to the input of the next.  Not all filters are capable of "pipelining".
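The pipeline idea can be sketched in plain Python. This is a conceptual model only, not the real MRL API - the FilterPipeline class and the two toy filters are made up for illustration:

```python
# A minimal, hypothetical model of a filter pipeline: each filter
# transforms a frame and hands the result to the next filter in order.
class FilterPipeline:
    def __init__(self):
        self.filters = []

    def add_filter(self, name, fn):
        self.filters.append((name, fn))

    def process(self, frame):
        # run the frame through every filter, in the order they were added
        for name, fn in self.filters:
            frame = fn(frame)
        return frame

pipeline = FilterPipeline()
# "PyramidDown": halve the resolution by dropping every other row/column
pipeline.add_filter("PyramidDown", lambda f: [row[::2] for row in f[::2]])
# "Invert": invert pixel intensities
pipeline.add_filter("Invert", lambda f: [[255 - p for p in row] for row in f])

frame = [[0, 50, 100, 150],
         [10, 60, 110, 160],
         [20, 70, 120, 170],
         [30, 80, 130, 180]]
result = pipeline.process(frame)
# PyramidDown keeps rows 0,2 and columns 0,2 -> [[0, 100], [20, 120]]
# Invert then yields [[255, 155], [235, 135]]
```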
 
How to Add an OpenCV Filter.
Below are the steps to add the PyramidDown filter:
Step 1 - select a filter from the available filters
Step 2 - press the arrow button to move it into the pipeline
Step 3 - give it a unique name
 
Python :
 

opencv.addFilter("nameOfMyFilter","PyramidDown")

 
 
How to Modify an OpenCV Filter
Some filters have configuration which can be changed.  Highlight the filter you're interested in, and further configuration options may appear.  Below, the Canny filter's low and high thresholds and aperture size can be changed.
 
Python:
 
opencv.setFilterCFG("canny","aperture", 1)
 
 
How to Remove an OpenCV Filter
Just highlight a filter and press the left arrow button to remove it from the pipeline.
 
 
The ever popular face detect.
 

LKOpticalTrack
Lucas Kanade optical tracking will track a selected corner through the video stream.  Load this filter, find a good feature, and click the mouse on it.  A point will be set where the mouse was clicked; as the image moves, the point will move with the selected object.
 
 
 

Canny
The Canny filter is used for detecting edges in a video stream.  This can be useful in object segmentation.
The function finds the edges in the input image and marks them in the output edge map using the Canny algorithm.  The smaller of lowThreshold and highThreshold is used for edge linking; the larger is used to find initial segments of strong edges.
 
Variables
  • apertureSize - aperture parameter for the Sobel operator 
  • lowThreshold - lower hysteresis threshold, used for edge linking
  • highThreshold - upper hysteresis threshold, used to find initial segments of strong edges
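The interplay of the two thresholds (hysteresis) can be sketched on a 1-D row of gradient magnitudes. hysteresis_1d is a hypothetical helper written for illustration, not part of MRL or OpenCV (the real Canny operates on 2-D gradient images):

```python
def hysteresis_1d(grad, low, high):
    """Keep strong pixels (>= high) and any weak pixels (low..high)
    connected to them; discard everything else."""
    keep = [g >= high for g in grad]      # strong edges survive outright
    changed = True
    while changed:                        # grow kept regions into weak pixels
        changed = False
        for i, g in enumerate(grad):
            if not keep[i] and low <= g < high:
                left = keep[i - 1] if i > 0 else False
                right = keep[i + 1] if i + 1 < len(grad) else False
                if left or right:
                    keep[i] = True
                    changed = True
    return keep

mags = [5, 30, 120, 60, 40, 10, 70, 20]
edges = hysteresis_1d(mags, low=25, high=100)
# 120 is a strong edge; 30, 60, 40 are weak but connected to it, so they
# survive; the isolated 70 is weak with no strong neighbor and is dropped
```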
 
 

Detector
 
<< need picture here raver !>>
 
The detector uses the OpenCV class BackgroundSubtractorMOG2.
 
Parameters:
  • history  Length of the history.
  • varThreshold   Threshold on the squared Mahalanobis distance between a pixel and the background model, used to decide whether the pixel is well described by the model. This parameter does not affect the background update. A typical value could be 4 sigma, that is, varThreshold = 4*4 = 16.
  • bShadowDetection   Parameter defining whether shadow detection should be enabled (true or false).
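The core idea - compare each pixel to a per-pixel background model and threshold the squared difference - can be sketched with a toy single-mean model. BackgroundSubtractorMOG2 itself is far more sophisticated (it keeps a mixture of Gaussians per pixel and normalizes by variance); SimpleBackgroundSubtractor below is made up for illustration:

```python
class SimpleBackgroundSubtractor:
    def __init__(self, learning_rate=0.1, threshold=16):
        self.learning_rate = learning_rate
        self.threshold = threshold    # analogous in spirit to varThreshold
        self.background = None        # one mean per pixel (1-D frames here)

    def apply(self, frame):
        if self.background is None:
            self.background = list(frame)     # first frame seeds the model
            return [0] * len(frame)
        # foreground mask: 255 where the squared difference exceeds threshold
        mask = [255 if (p - b) ** 2 > self.threshold else 0
                for p, b in zip(frame, self.background)]
        # update the background toward the current frame (the "history")
        self.background = [b + self.learning_rate * (p - b)
                           for p, b in zip(frame, self.background)]
        return mask

sub = SimpleBackgroundSubtractor()
sub.apply([100, 100, 100])           # first frame becomes the background
mask = sub.apply([100, 100, 200])    # third pixel changed -> foreground
# mask is [0, 0, 255]
```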
 
 
GoodFeaturesToTrack
 
This filter will find prominent corners in a video stream.  After these points are gathered, tracking can be done with the same set of points.  LKOpticalTrack may use this method to initialize a set of good tracking points.
 
 
Flood fill
 
Good Features
 
 
 
And InRange, which filters on Hue, Saturation, and Value.
 
Many of the filters output positional points, which in turn can be consumed by other services in MRL. For example, a pan/tilt kit with servos can track the point from FaceDetect or LKOpticalTrack.
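One way a consuming service could map a published point onto servo positions can be sketched as follows. point_to_servo_angles is a hypothetical helper written for illustration, not an MRL API; it assumes the point arrives normalized to the 0..1 range of the frame:

```python
def point_to_servo_angles(x, y, pan_range=(0, 180), tilt_range=(0, 180)):
    """Linearly map a normalized (x, y) point in the frame onto
    pan/tilt servo angles within the given ranges (degrees)."""
    pan = pan_range[0] + x * (pan_range[1] - pan_range[0])
    tilt = tilt_range[0] + y * (tilt_range[1] - tilt_range[0])
    return pan, tilt

# a point dead-center in the frame maps to the middle of both ranges
pan, tilt = point_to_servo_angles(0.5, 0.5)   # -> (90.0, 90.0)
```

In practice a tracking loop would also want smoothing or a PID controller rather than this direct linear mapping, so the servos do not jitter with every frame.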
 
The addition or removal of filters can be controlled through a Python script in MRL, so that when motion appears, a program could remove the motion filter and set an LKOpticalTrack point to track the new motion.
 

Not all cameras work with OpenCV - here is a list of supported cameras from Willow Garage:

http://opencv.willowgarage.com/wiki/Welcome/OS

Example code (from branch develop):
#file : OpenCV.py (github)
from org.myrobotlab.image import Util
# start a opencv service
 
opencv = runtime.start("opencv","OpenCV")
#gui.setActiveTab("opencv")
 
# add python as a listener to OpenCV data
# this tells the framework - whenever opencv.publishOpenCVData is invoked
# python.onOpenCVData will get called
python = runtime.start("python","Python")
python.subscribe("opencv", "publishOpenCVData")
 
 
# call back - all data from opencv will come back to 
# this method
def onOpenCVData(data):
  # check for a bounding box
  if data.getBoundingBoxArray() != None:
    for box in data.getBoundingBoxArray():
      print("bounding box", box.x, box.y, box.width, box.height)
 
# to capture from an image on the file system
# opencv.captureFromImageFile("C:\Users\grperry\Desktop\mars.jpg")
 
# not for you, it's for test
if ('virtual' in globals() and virtual):
  opencv.setMinDelay(500)
  opencv.setFrameGrabberType("org.bytedeco.javacv.FFmpegFrameGrabber")
  opencv.setInputSource("file")
  opencv.setInputFileName(Util.getRessourceDir()+"OpenCV/testData/monkeyFace.mp4")
 
opencv.capture()
 
#### LKOpticalTrack ####################
# experiment with Lucas Kanade optical flow/tracking
# adds the filter and one tracking point
 
opencv.addFilter("LKOpticalTrack")
opencv.setDisplayFilter("LKOpticalTrack")
# attempt to set a sample point in the middle
# of the video stream
opencv.invokeFilterMethod("LKOpticalTrack","samplePoint", 0.5, 0.5)
sleep(4)
opencv.removeFilters()
 
opencv.addFilter("FaceDetect")
opencv.setDisplayFilter("FaceDetect")
 
sleep(4)
opencv.removeFilters()
 
 
#### PyramidDown ####################
# scale the view down - faster since updating the screen is 
# relatively slow
opencv.addFilter("PyramidDown")
opencv.setDisplayFilter("PyramidDown")
sleep(4)
# adding a second pyramid down filter - we need
# a unique name - so we'll call it PyramidDown2
opencv.addFilter("PyramidDown2","PyramidDown")
opencv.setDisplayFilter("PyramidDown2")
sleep(4)
opencv.removeFilters()
 
 
#### Canny ########################
# adding a canny filter
opencv.addFilter("Canny")
opencv.setDisplayFilter("Canny")
sleep(4)
canny = opencv.getFilter("Canny") 
# changing parameters
canny.apertureSize = 3
canny.lowThreshold = 10.0
canny.highThreshold = 200.0
 
sleep(2)
 
canny.apertureSize = 5
canny.lowThreshold = 10.0
canny.highThreshold = 100.0
 
sleep(4)
opencv.removeFilters()
opencv.stopCapture()

Hi!  Playing with OpenCV (InMoov service), works great!  Does someone know how to change the Haar Cascades file from Python?  (the dropdown where the xml files are stored)

thank you

 

Can I ask how you got MRL to work with the raspi-camera module? I have been trying this for over a year now and can't get it to work. The camera module just won't start, even though it works fine if I call it with raspivid.

I have the v4l2 drivers installed, and in /dev/ I found video0, so that is working fine, right?

Any help is appreciated ; )

Example configuration (from branch develop):
#file : OpenCV.py (github)
!!org.myrobotlab.service.config.OpenCVConfig
cameraIndex: 0
capturing: false
filters: {}
grabberType: OpenCV
inputFile: null
inputSource: camera
listeners: null
nativeViewer: true
peers: null
type: OpenCV
webViewer: false