I'm running into several difficulties

Hi guys, I have some problems. I tried to use the Tracking service but it doesn't work, and Azure Translator doesn't work either...

I urgently need to get them working; if you tell me what the problems are, I can try to solve them.

Thanks in advance


ShaunHolt's picture

What exactly are the problems

What exactly are the problems you're having? What MRL version? What OS?

Papaouitai's picture

The OpenCV in Tracking

OpenCV in the Tracking service doesn't show an image on macOS or on Windows, but on macOS the webcam light turns on. I have the latest version, 2180.

ShaunHolt's picture

Which grabber are you using,

Which grabber are you using, Sarxos, OpenCV, etc.? And what camera are you using? Did the camera work in OpenCV before?

Papaouitai's picture

I'm using the OpenCV grabber

I tried with VideoInput and also with OpenCV, but with the first one I get this error:

videoInput.getPixels() Error: Could not get pixel 

and with OpenCV the camera starts (the light turns on) but the view is all black.

ShaunHolt's picture

Just in case you don't see it

Just in case you don't see it in chat....

 

calamity: @papaouitai: try to initialize the Tracking service as explained here: http://myrobotlab.org/content/tracking-service  I have found that the initialization you use seems buggy. That should be fixed, but I haven't had the time to dig in and find the problem.
moz4r's picture

translator

Hi Papa! About the translator: it was fixed a few days ago. You need a new key from the Azure platform.

https://azure.microsoft.com/free/

AzureTranslator.setCredentials(apikey)
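
A rough sketch of what that looks like in a MRL Python script. The service name "AzureTranslator" and setCredentials are from above, but the translate call at the end is only my guess at the signature, so check the service page for your build:

translator = Runtime.createAndStart("translator", "AzureTranslator")
translator.setCredentials("YOUR_AZURE_KEY")  # key generated from the Azure portal
# translated = translator.translate("hello world", "en", "it")  # hypothetical signature, verify before use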

 

Papaouitai's picture

Ah ok, thanks

Ah ok, thanks

Papaouitai's picture

I created a script but the

I created a script, but OpenCV can't start the filter.

kwatters's picture

More detail..

Hi Papa, you should provide a bit more detail. Is there an error message? What script are you using? Is it checked into the GitHub pyrobotlab repo? Have you tried the other frame grabbers, such as the Sarxos grabber? The Sarxos grabber works in more environments than pretty much every other grabber we have in the system.
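
For reference, switching grabbers from a Python script looks roughly like this. The grabber-selection call is an assumption on my part and the exact method name may differ in your MRL build, so check the OpenCV service page:

opencv = Runtime.createAndStart("opencv", "OpenCV")
opencv.setCameraIndex(0)
opencv.setFrameGrabberType("Sarxos")  # assumed method name; other candidates would be "OpenCV" or "VideoInput"
opencv.capture()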

-Kevin

Papaouitai's picture

Hi kwatters, thanks for

Hi kwatters, thanks for the answer. I'm using this script:


xPin = 3
yPin = 4
arduinoPort = "COM7"
cameraIndex = 1
x = Runtime.createAndStart("tracker.x", "Servo")
y = Runtime.createAndStart("tracker.y", "Servo")
x.setPin(xPin)
x.setVelocity(-1)
y.setPin(yPin)
y.setVelocity(-1)
controller = Runtime.createAndStart("tracker.controller", "Arduino")
controller.connect(arduinoPort)
tracker = Runtime.createAndStart("tracker", "Tracking")
opencv = Runtime.start("opencv","OpenCV")
opencv.setCameraIndex(cameraIndex)
tracker.attach(opencv)
opencv.capture()
tracker.startLKTracking()

 

kwatters's picture

That's not the script you ran for your no worky

I'm taking a look at the no worky here:  http://myrobotlab.org/myrobotlab_log/upload/Papaouitai/1495498498.myrobotlab.log

The above script is not the one that you ran to generate that no worky.

It seems like you executed the following code:

webgui=Runtime.create("WebGui","WebGui")
webgui.autoStartBrowser(False)
webgui.startService()

#start speech recognition and AI
wksr=Runtime.createAndStart("webkitspeechrecognition","WebkitSpeechRecognition")
pinocchio = Runtime.createAndStart("pinocchio", "ProgramAB")
pinocchio.startSession("default", "pinocchio")
htmlfilter=Runtime.createAndStart("htmlfilter","HtmlFilter")
mouth=Runtime.createAndStart("i01.mouth","AcapelaSpeech")
#wksr.addTextListener(pinocchio)
wksr.addListener("publishText","python","heard")
pinocchio.addTextListener(htmlfilter)
htmlfilter.addTextListener(mouth)

opencv=Runtime.start("opencv","OpenCV")
opencv.setCameraIndex(0)
opencv.capture()
fr=opencv.addFilter("FaceRecognizer")
opencv.setDisplayFilter("FaceRecognizer")
fr.train()  # it takes some time to train and be able to recognize faces

def heard(data):
	lastName=fr.getLastRecognizedName()
	if((lastName+"-pinocchio" not in pinocchio.getSessionNames())):
		mouth.speak("Hello "+lastName)
		sleep(2)

 

 

That being said, I'm still a little confused because the line number for the exception in your no worky does not line up with what I would expect in the source code.

 

I see the following exception:

java.lang.NullPointerException
	at org.myrobotlab.opencv.OpenCVFilter.invoke(OpenCVFilter.java:114)
	at org.myrobotlab.opencv.OpenCVFilterFaceRecognizer.process(OpenCVFilterFaceRecognizer.java:431)
	at org.myrobotlab.opencv.VideoProcessor.run(VideoProcessor.java:502)
	at java.lang.Thread.run(Unknown Source)
------

 

But from what I can tell, the line in question should not be throwing a null pointer exception. The only thing I can think of is that the video processor was null. I know GroG was refactoring some code there...

Perhaps his refactoring broke this?

 

 

 

GroG's picture

So the noWorky script posted

So the noWorky script posted by Papa did not result in an NPE.
Although it's more than just "Tracking" - face recognition wasn't part of Tracking's face detection.
Not sure what Papa is trying to do, but I don't think my refactoring on a separate branch has caused the issues reported.

kwatters's picture

VideoProcessor was null on the filter so invoke was blowing up

I just checked in a change that I think will fix this error for you, Papa. Try out the latest build; it should be worky for the error you sent last.

Longer answer:

The issue was that in order for a filter to publish data, it invokes a method on the OpenCV service, which publishes that data via the MRL framework. The only way a filter can get to the OpenCV service it's attached to is through the VideoProcessor. The VideoProcessor wasn't being set on the filter when it was attached to the OpenCV service. This change makes sure the video processor is set on the filter when it's added to the OpenCV service.

Papaouitai's picture

Script

I've written this script; it works, but the servos don't move correctly.

 

xPin = 2
yPin = 3
arduinoPort = "COM7"
cameraIndex = 1
controller = Runtime.createAndStart("tracker.controller", "Arduino")
controller.connect(arduinoPort)
x = Runtime.createAndStart("tracker.x", "Servo")
y = Runtime.createAndStart("tracker.y", "Servo")
x.attach(controller, xPin)
y.attach(controller, yPin)
tracker = Runtime.createAndStart("tracker", "Tracking")
tracker.attach(x, "x")
tracker.attach(y, "y")
opencv = Runtime.start("opencv","OpenCV")
opencv.setCameraIndex(cameraIndex)
tracker.attach(opencv)
opencv.capture()
tracker.faceDetect()

ShaunHolt's picture

What exactly are the servos

What exactly are the servos doing? Moving slowly, moving in wrong direction, not moving enough?
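
One quick way to rule the tracker out is to drive the servos directly (reusing the port and pins from your script) and watch how far they actually travel. If they still barely move here, the problem is in the servo limits or wiring, not in Tracking:

from time import sleep

controller = Runtime.createAndStart("tracker.controller", "Arduino")
controller.connect("COM7")  # same port as in your script
x = Runtime.createAndStart("tracker.x", "Servo")
y = Runtime.createAndStart("tracker.y", "Servo")
x.attach(controller, 2)  # same pins as in your script
y.attach(controller, 3)
x.moveTo(60)
sleep(1)
x.moveTo(120)
sleep(1)
y.moveTo(60)
sleep(1)
y.moveTo(120)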

Papaouitai's picture

Not moving enough, I would

Not moving enough. I want to control the x and y servos of the InMoov head.

ShaunHolt's picture

Have you made sure your

Have you made sure the settings for how far the servos can move are set correctly?

If they're not moving enough, it makes me think that somewhere their range of movement is limited to a certain amount.

Do they move, say, an equal distance left and right... up and down... or do they move further to the left than to the right?
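
If the limits do turn out to be the problem, something like this should widen them. setMinMax and map are the usual MRL Servo calls for this, but treat the exact names as assumptions and confirm them against the Servo service page for your build:

x = Runtime.createAndStart("tracker.x", "Servo")
y = Runtime.createAndStart("tracker.y", "Servo")
x.setMinMax(0, 180)  # allow the full sweep; tighten to whatever the InMoov head can safely reach
y.setMinMax(0, 180)
# x.map(0, 180, 20, 160)  # optional: remap the input range to a narrower, safe output range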