I am trying to get faceDetect working. My camera delivers a rotated picture. I found the Transpose filter, which gives me an upright picture.

part of my script:

headTracking = i01.startHeadTracking(leftPort)
eyesTracking = i01.startEyesTracking(leftPort)
i01.opencv.addFilter("Transpose")
i01.headTracking.faceDetect()
i01.eyesTracking.faceDetect()

When I go to the OpenCV page, the Transpose filter is not in place. Where do I have to place the addFilter command?

This is in the Swing GUI, OpenCV tab. In the web GUI I cannot see the list of applied filters, and I see no way to add them manually.

The Swing GUI also has a problem with the filter list. Trying to manually remove a filter that is not the last added item leaves the dialog in an unusable state.

So somewhere a PyramidDown and a Gray filter are placed before the FaceDetect. For my cam I assume I need to add a Transpose filter just before FaceDetect, as face detect expects a "hair on top" image?

Alessandruino

8 years 3 months ago

Do you get any errors when you launch the script, juerg?

BTW I guess Transpose should be applied after the faceDetect method... because FaceDetect removes all the filters and puts in its own (PyramidDown, Gray and FaceDetect?)...

Another filter you can use is the Affine filter... you can specify an angle to rotate the image...

Ale

Hi Ale

Thanks for jumping in.  I do not get errors in the log.

I tried to follow the code in java and saw that default filters (PyramidDown and Gray) are set up.

Tried then to create a python function that uses opencv methods to set my filter sequence (opencv.addFilter).

As a result I see the filters in the OpenCV tab and I also get a rectangle over my face. However, it looks like this way no commands get sent to the servos.

Maybe you know better, but with my rotated pictures I could not get the rectangle to show up. So I thought the FaceDetect filter needs an upright image as input for finding eyes and mouth.

My python function so far:

def trackHumans():
    headTracking = i01.startHeadTracking(leftPort)
    i01.opencv.removeFilters()
    i01.opencv.addFilter("PyramidDown")
    i01.opencv.addFilter("Gray")
    i01.opencv.addFilter("T1", "Transpose")
    i01.opencv.addFilter("T2", "Transpose")
    i01.opencv.addFilter("T3", "Transpose")
    i01.opencv.addFilter("FaceDetect")
    i01.opencv.setDisplayFilter("FaceDetect")
    i01.opencv.capture()
 
So maybe I need a publishOpenCVData in addition? I also could not find out how to set the PID values through the opencv handle.

Use the Affine filter instead... you can set an angle of rotation so you don't have to apply multiple filters...

Ya... you need to set the filter as a pre-filter... because if FaceDetect isn't the last filter applied, it doesn't send data to the servos...
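A sketch of why that ordering matters (plain Python with hypothetical filter functions, not MyRobotLab's actual implementation): if only the last filter's output reaches the tracking service, a rotation placed after FaceDetect replaces the frame that carries the detection data, so the rotation has to run as a pre-filter instead.

```python
# Hypothetical filter chain where only the final filter's output is
# published downstream (a sketch, not MyRobotLab's real pipeline).

def rotate(frame):
    # Returns a new frame with no detection data attached.
    return {"image": frame["image"] + "-rotated"}

def face_detect(frame):
    # Copies the frame and attaches face coordinates.
    out = dict(frame)
    out["faces"] = [(100, 120)]  # pretend we found a face here
    return out

def run_chain(frame, filters):
    for f in filters:
        frame = f(frame)
    return frame  # only this final output reaches the tracking service

# FaceDetect last: the published frame still carries the face data.
good = run_chain({"image": "cam"}, [rotate, face_detect])
print("faces" in good)   # True

# Rotation after FaceDetect: its output has no face data left to track.
bad = run_chain({"image": "cam"}, [face_detect, rotate])
print("faces" in bad)    # False
```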

 

Here is a script i wrote and tested :

 

from org.myrobotlab.opencv import OpenCVFilterAffine

affine = OpenCVFilterAffine("affine")
affine.setAngle(180.0)

leftPort = "/dev/cu.wchusbserial1450"
i01 = Runtime.start("i01", "InMoov")

headTracking = i01.startHeadTracking(leftPort)
eyesTracking = i01.startEyesTracking(leftPort, 10, 12)

i01.headTracking.addPreFilter(affine)
i01.eyesTracking.addPreFilter(affine)

sleep(1)

i01.headTracking.faceDetect()
i01.eyesTracking.faceDetect()

As a side comment, the Affine transform will rotate and translate an image by setting the angle, dx and dy properties on it.

The Transpose filter will actually change the dimensions of the image. So if you were 640x480 before, after the Transpose filter your new resolution will be 480x640...
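A quick illustration of that dimension change in plain Python (no OpenCV or MyRobotLab involved, just nested lists standing in for a frame):

```python
# A "frame" as rows of pixels: transposing swaps width and height,
# while a 180-degree rotation (what Affine with angle=180 does) keeps them.

def transpose(frame):
    """Swap rows and columns, like the Transpose filter."""
    return [list(col) for col in zip(*frame)]

def rotate180(frame):
    """Reverse rows and columns, i.e. rotate the image 180 degrees."""
    return [row[::-1] for row in frame[::-1]]

frame = [[0] * 640 for _ in range(480)]  # 640x480: 480 rows of 640 pixels

t = transpose(frame)
r = rotate180(frame)

print(len(t[0]), "x", len(t))  # 480 x 640 -> dimensions swapped
print(len(r[0]), "x", len(r))  # 640 x 480 -> dimensions unchanged
```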

 

Hi Ale - magic

I get my face upright now with affine(90).

Tried first with head tracking only and it works on the x-axis.

The neck servo does not move; if I duck, it loses me. I can move the neck manually with the slider.

I tried to set the PID values, but that throws an error:

i01.headTracking.ypid.setPID(15.0, 5.0, 0.1)

 

 

Now we use the PID2 service... which can handle multiple PIDs... so you need to specify in the first parameter whether you are referring to x or y for each tracking...

Use this instead : 

i01.headTracking.pid.setPID("y", 15.0, 5.0, 0.1)

That worked insofar as it does not throw an error, but the neck is very slow in response and it's easy to move out of focus. Up/down reaction is slow, e.g. if I have my mouth at the lower image border, the neck does not move to center my face again.

Which of the 3 values will give a quicker servo response? I currently use 15.0, 5.0, 0.1.

Hi Juerg

The order of the values is P, I, D:

P = Proportional gain. It is used to calculate how far the servo should move when your face isn't in the middle of the screen. This error can be between -0.5 and 0.5. The P gain depends on how far away from the camera you are, but let's say that you are at a distance such that when the error is 0.1 the head should turn 10 degrees. Then you need a P gain of 100. To find a realistic value for P, set both I and D to 0 and try different values for P.

I = Integral gain. As you can understand from the explanation above, as soon as your face is in the middle again, the error will be 0, so the servo will turn back to its original position. Doh. The integral part accumulates errors over time (integration), so it compensates for that. What value you should use depends on how often the PID samples, but generally speaking it should be less than the P value; how much less is difficult to tell. Start with 1/100 of the P value, increase it if it undercompensates, and decrease it if the head starts to shake. (That can also happen if you have too high a P gain.)

D = Derivative gain (this is the opposite of I). This component reacts to fast changes, so if you move your head quickly, it should make the servos move faster than if you move your head slowly. This parameter is the most difficult to set. Either leave it at 0 or tune it a small amount at a time.
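As a sketch of how the three terms combine (plain textbook PID in Python, not the MyRobotLab PID2 service):

```python
class SimplePID:
    """Minimal textbook PID loop, illustrating the P/I/D roles above.
    A sketch only -- not the MyRobotLab PID2 service."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = 0.0

    def compute(self, error, dt=1.0):
        # P: proportional to the current offset from center
        p = self.kp * error
        # I: accumulated past error, corrects a steady offset
        self.integral += error * dt
        i = self.ki * self.integral
        # D: rate of change of the error, reacts to fast movement
        d = self.kd * (error - self.last_error) / dt
        self.last_error = error
        return p + i + d

# Face 0.1 to the side of center, P gain 100, I and D zeroed for tuning:
# the P term alone asks for a 10-degree correction, matching the example.
pid = SimplePID(kp=100.0, ki=0.0, kd=0.0)
print(pid.compute(0.1))  # -> 10.0
```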

/Mats 

 

 

juerg

8 years 3 months ago

In reply to by Mats

Thanks Mats for all the information. I can see some improvement, but it's still easy to get lost. Maybe my framerate (8..12) is too low?