I played a little with the Jetson Nano "Hello AI World" demo, and it's impressive how fast it all runs. Now I'm wondering whether the Jetson could be used as a brain for object recognition, face recognition, etc. for InMoov. Instead of using OpenCV in MyRobotLab, the output from e.g. detectNet would be fed into MyRobotLab by the Jetson.
Is that generally possible, and how would something like this have to be set up?
Connecting to the Windows PC via PuTTY and running a stream via VLC is one thing, but how do you get the detection data into MyRobotLab?
Hello Pepper,
MyRobotLab implements a Publish/Subscribe pattern.
So getting data into MRL means "publishing" it into MRL.
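To make the publish/subscribe idea concrete, here is a minimal toy sketch of the pattern itself (this is not MRL's actual API, just an illustration of how a publisher and subscribers are decoupled through topics):

```python
# Toy publish/subscribe broker, for illustration only (not MRL's API).
class Broker:
    def __init__(self):
        self.subs = {}  # topic name -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        """Register a callback to be invoked for every message on a topic."""
        self.subs.setdefault(topic, []).append(callback)

    def publish(self, topic, data):
        """Deliver data to every subscriber of the topic."""
        for callback in self.subs.get(topic, []):
            callback(data)

broker = Broker()
seen = []
broker.subscribe("detections", seen.append)

# A vision source (e.g. the Jetson side) would publish its results:
broker.publish("detections", {"label": "person", "confidence": 0.9})
```

The Jetson would play the publisher role, and whatever MRL service consumes the detections would be a subscriber; neither needs to know about the other directly.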
On top of the pub/sub framework, MRL has several connectivity options.
REST would probably be the easiest one to begin familiarizing yourself with.
A super-quick example would be to use the following two URLs in the browser.
This one starts a Mary speech service:
This one sends "hello" to be spoken by the just-created Mary speech service:
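In case the URLs above don't come through, here is a hedged sketch of what such REST calls typically look like. The host, port (8888 is a common WebGui default), and the `/api/service/<name>/<method>/<args>` path pattern are assumptions about this particular install, as are the service name `mary` and the `speak` method; check them against your own MRL instance:

```python
from urllib.parse import quote

# Assumed defaults: MRL WebGui on localhost:8888 -- adjust to your install.
MRL = "http://localhost:8888/api/service"

def mrl_url(service, method, *args):
    """Build an MRL-style REST URL: /api/service/<name>/<method>/<args...>."""
    tail = "/".join(quote(str(a)) for a in args)
    return f"{MRL}/{service}/{method}" + (f"/{tail}" if tail else "")

# Start a MarySpeech service named "mary", then make it speak "hello".
start_url = mrl_url("runtime", "start", "mary", "MarySpeech")
speak_url = mrl_url("mary", "speak", "hello")
# Paste these into a browser, or fetch them with urllib.request.urlopen(...).
```

The point is just that every REST call is an ordinary HTTP URL, so anything that can fetch a URL can drive MRL.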
If the other application is capable of sending data via HTTP, MRL is capable of accepting it.
If the other application is capable of running Python, you can write a Python script that sends the data over HTTP to MRL.
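As a sketch of what such a script on the Jetson might look like: package each detectNet result as JSON and POST it to an MRL endpoint. The endpoint URL, the service name, and the method are all hypothetical placeholders here, not real MRL defaults; only the general shape (build a payload, send it over HTTP) is the point:

```python
import json
from urllib.request import Request, urlopen

# Hypothetical endpoint: replace with the URL of whatever MRL service/method
# you expose to receive detections on your install.
MRL_URL = "http://localhost:8888/api/service/python/onDetection"

def detection_payload(label, confidence, left, top, right, bottom):
    """Package one detectNet-style result as a JSON string for the HTTP call."""
    return json.dumps({
        "label": label,
        "confidence": confidence,
        "box": [left, top, right, bottom],
    })

def send(payload):
    """POST the JSON payload to MRL (requires a running MRL instance)."""
    req = Request(MRL_URL, data=payload.encode(),
                  headers={"Content-Type": "application/json"})
    return urlopen(req)

# In the Jetson's inference loop you would forward each detection, e.g.:
# send(detection_payload("person", 0.92, 10, 20, 110, 220))
```

The `send` call is left commented out since it needs a live MRL instance; the payload-building part works standalone.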
Sounds so easy
Start small, experiment with the REST URLs I gave you...
Would you be very surprised if I can't even do anything with it? I have absolutely no idea what to do with it. When I enter the two examples in the browser, I only get a message that the page cannot be opened. Does MyRobotLab have to be running for this? Apparently I'm too stupid for this, right?