Hi guys, I have a couple of problems: I'm trying to use the Tracking service but it doesn't work, and Azure Translator doesn't work either...

I urgently need to get them working; if you can point out the problems, I can try to solve them myself.

Thanks in advance

OpenCV in the Tracking service doesn't show an image on macOS or on Windows, although on macOS the webcam light does turn on. I have the latest version, 2180.

I tried the VideoInput grabber and also OpenCV; with the first one I receive this error:

videoInput.getPixels() Error: Could not get pixel 

and with the OpenCV grabber the camera starts (the light turns on), but the view is all black.

Hi Papa, you should provide a bit more detail. Is there an error message? What script are you using? Is it checked into the GitHub pyrobotlab repo? Have you tried the other frame grabbers, such as the Sarxos grabber? The Sarxos grabber works in more environments than pretty much any other grabber we have in the system.
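Something like this is all it should take to try the Sarxos grabber (a rough sketch; I'm assuming the method is called setGrabberType, so double-check the name on the OpenCV service page in your build):

opencv = Runtime.start("opencv", "OpenCV")
# assumption: setGrabberType switches the frame grabber implementation
opencv.setGrabberType("Sarxos")
opencv.setCameraIndex(1)
opencv.capture()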

-Kevin

Papaouitai

7 years 6 months ago

In reply to kwatters

Hi kwatters, thanks for the answer. I'm using this script:

# pin, port and camera configuration
xPin = 3
yPin = 4
arduinoPort = "COM7"
cameraIndex = 1

# pan/tilt servos
x = Runtime.createAndStart("tracker.x", "Servo")
y = Runtime.createAndStart("tracker.y", "Servo")
x.setPin(xPin)
x.setVelocity(-1)
y.setPin(yPin)
y.setVelocity(-1)

# Arduino controller
controller = Runtime.createAndStart("tracker.controller", "Arduino")
controller.connect(arduinoPort)

# tracking service and camera
tracker = Runtime.createAndStart("tracker", "Tracking")
opencv = Runtime.start("opencv", "OpenCV")
opencv.setCameraIndex(cameraIndex)
tracker.attach(opencv)
opencv.capture()
tracker.startLKTracking()


I'm taking a look at the noWorky here: http://myrobotlab.org/myrobotlab_log/upload/Papaouitai/1495498498.myrob…

The above script is not the one you ran to generate that noWorky.

It seems like you executed the following code:

webgui = Runtime.create("WebGui", "WebGui")
webgui.autoStartBrowser(False)
webgui.startService()

# start speech recognition and AI
wksr = Runtime.createAndStart("webkitspeechrecognition", "WebkitSpeechRecognition")
pinocchio = Runtime.createAndStart("pinocchio", "ProgramAB")
pinocchio.startSession("default", "pinocchio")
htmlfilter = Runtime.createAndStart("htmlfilter", "HtmlFilter")
mouth = Runtime.createAndStart("i01.mouth", "AcapelaSpeech")
# wksr.addTextListener(pinocchio)
wksr.addListener("publishText", "python", "heard")
pinocchio.addTextListener(htmlfilter)
htmlfilter.addTextListener(mouth)

opencv = Runtime.start("opencv", "OpenCV")
opencv.setCameraIndex(0)
opencv.capture()
fr = opencv.addFilter("FaceRecognizer")
opencv.setDisplayFilter("FaceRecognizer")
fr.train()  # it takes some time to train and be able to recognize a face

def heard(data):
    lastName = fr.getLastRecognizedName()
    if (lastName + "-pinocchio") not in pinocchio.getSessionNames():
        mouth.speak("Hello " + lastName)
        sleep(2)


That being said, I'm still a little confused, because the line number of the exception in your noWorky doesn't line up with what I would expect from the source code.


I see the following exception:

java.lang.NullPointerException
	at org.myrobotlab.opencv.OpenCVFilter.invoke(OpenCVFilter.java:114)
	at org.myrobotlab.opencv.OpenCVFilterFaceRecognizer.process(OpenCVFilterFaceRecognizer.java:431)
	at org.myrobotlab.opencv.VideoProcessor.run(VideoProcessor.java:502)
	at java.lang.Thread.run(Unknown Source)
------


But from what I can tell, the line in question should not be throwing a NullPointerException. The only thing I can think of is that the video processor was null. I know GroG was refactoring some code there.

Perhaps his refactoring broke this?


So the noWorky script posted by Papa did not result in an NPE.
Also, it's more than just "Tracking": Face Recognition wasn't part of Tracking's face detection.
Not sure what Papa is trying to do, but I don't think my refactoring on a separate branch caused the issues reported.

I just checked in a change that I think will fix this error for you, Papa. Try out the latest build; it should be worky for the error you sent last.

Longer answer:

The issue was that, in order for a filter to publish data, it invokes a method on the OpenCV service, which publishes that data via the MRL framework. The only way a filter can get to the OpenCV service it's attached to is through the VideoProcessor. The VideoProcessor wasn't being set on the filter when the filter was attached to the OpenCV service. This change makes sure the VideoProcessor is set on the filter when it's added to the OpenCV service.
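If you want a quick way to verify it, a minimal sketch that exercises the same path (just the OpenCV service plus the FaceRecognizer filter, using only the calls from your script above) would be:

opencv = Runtime.start("opencv", "OpenCV")
opencv.setCameraIndex(0)
# the FaceRecognizer filter is the one that was hitting the NPE when publishing
fr = opencv.addFilter("FaceRecognizer")
opencv.setDisplayFilter("FaceRecognizer")
opencv.capture()

If the fix is in, frames should publish through the filter without the NullPointerException in OpenCVFilter.invoke().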

I've tried this script; it works, but the servos don't move correctly:


xPin = 2
yPin = 3
arduinoPort = "COM7"
cameraIndex = 1

controller = Runtime.createAndStart("tracker.controller", "Arduino")
controller.connect(arduinoPort)

x = Runtime.createAndStart("tracker.x", "Servo")
y = Runtime.createAndStart("tracker.y", "Servo")
x.attach(controller, xPin)
y.attach(controller, yPin)

tracker = Runtime.createAndStart("tracker", "Tracking")
tracker.attach(x, "x")
tracker.attach(y, "y")

opencv = Runtime.start("opencv", "OpenCV")
opencv.setCameraIndex(cameraIndex)
tracker.attach(opencv)
opencv.capture()
tracker.faceDetect()

ShaunHolt

7 years 6 months ago

In reply to Papaouitai

What exactly are the servos doing? Moving slowly, moving in the wrong direction, not moving enough?

Have you made sure the settings for how far the servos can move are set correctly?

If they're not moving enough, that makes me think the range of movement is being limited somewhere.

Does it move, say, an equal distance left and right, and up and down? Or does it move further to the left than it does to the right?
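If it does turn out to be a range or direction problem, the limits can be set on the Servo services before tracking starts. A rough sketch (I'm assuming setMinMax and setInverted are available on the Servo service in your build; check the names if they differ):

x = Runtime.createAndStart("tracker.x", "Servo")
y = Runtime.createAndStart("tracker.y", "Servo")
# assumption: limit how far each servo is allowed to travel (degrees)
x.setMinMax(30, 150)
y.setMinMax(30, 150)
# assumption: flip direction if a servo moves the wrong way
x.setInverted(True)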