I feel like this is a dumb question, but I am going to ask it anyway.

I have been playing around with the vision examples that come with MRL. These are very cool and work well. The OpenCV GUI is very nice and friendly. It is what brought me to MRL. Playing with faceTracking.py, I see that the location and size of the box are printed to the Python window. My question is this: how would one send this data across a serial port?
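One possible approach, as a minimal sketch: start MRL's Serial service and forward each published bounding box over the port. This assumes the OpenCV service is named "opencv" and publishes OpenCVData with a bounding-box list, and that "COM3" is the port your device sits on; adjust all of these names for your setup.

# minimal sketch: forward face-tracking box data over a serial port
serial = runtime.start("serial", "Serial")
serial.connect("COM3")

# route the OpenCV service's published frame data into Python
python.subscribe("opencv", "publishOpenCVData")

# callback invoked for each published frame
def onOpenCVData(data):
    boxes = data.getBoundingBoxArray()
    if boxes is not None:
        for box in boxes:
            # send x, y, width, height as one comma-separated line
            serial.write("%d,%d,%d,%d\n" % (box.x, box.y, box.width, box.height))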

On many robots, when they move their arm, it tends to sway back and forth as the motion stops. Do you use some sort of sensor system and active control to compensate for this swaying?
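A hedged sketch of one passive way to tame that sway: ramp the commanded position so the servo eases into its target instead of stopping dead. This uses only moveTo() on a Servo service; the service name panServo and the getPos() call are assumptions, and a true active approach would close the loop with an IMU, which is not shown here.

import time

# minimal sketch: ease a servo into its target to soften the stop
def easeTo(servo, target, steps=20, dwell=0.02):
    start = servo.getPos()  # current position in degrees (method name assumed)
    for i in range(1, steps + 1):
        t = float(i) / steps
        s = t * t * (3 - 2 * t)  # smoothstep: gentle start, gentle stop
        servo.moveTo(start + (target - start) * s)
        time.sleep(dwell)

panServo = runtime.start("panServo", "Servo")
easeTo(panServo, 120)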

Joe Dunfee

Javadoc link
Example code (from branch develop):
#file : Twitter.py (github)
# start the service
twitter = runtime.start("twitter","Twitter")
 
# credentials for your twitter account and api key info goes here
consumerKey = "XXX"
consumerSecret = "XXX" 
accessToken = "XXX"
accessTokenSecret = "XXX"
 
# set the credentials on the twitter service.
twitter.setSecurity(consumerKey, consumerSecret, accessToken, accessTokenSecret)
twitter.configure()
 
# tweet all of your beep bop boops..
twitter.tweet("Ciao from MyRobotLab")
Example configuration (from branch develop):
!!org.myrobotlab.service.config.ServiceConfig
listeners: null
peers: null
type: Twitter

References

[[Twitter.simpletweet.py]]

[[Twitter.uploadpicture.py]]

[[Twitter.uploadFromOpenCV.py]]

Here Leonardo Triassi has made a spectacular InMoov, and it's now running some of the same scripts from the InMoov service page. Soon they will be sharing brains!!! Great work, Mr. Triassi!

Javadoc link
Example code (from branch develop):
#########################################
# wolframalpha.py
# description: used as a general template
# more info @: http://myrobotlab.org/service/WolframAlpha
#########################################
 
#Start the Service
wolframalpha = runtime.start("wolframalpha","WolframAlpha")
 
#Besides using the GUI of the engine, which works much like a regular search engine, one can use the engine with these methods.
keyword = "ape"
#Searches a keyword
print(wolframalpha.wolframAlpha(keyword))
print ("-----------------------------------------") #delimiter to see which output came from what method
 
#Does the same as print(wolframalpha.wolframAlpha(keyword))
print(wolframalpha.wolframAlpha(keyword,0))
print ("-----------------------------------------")
 
#Prints HTML code; can be useful for extracting the image links, for example
print(wolframalpha.wolframAlpha(keyword,1))
print ("-----------------------------------------")
 
#Searches a keyword and only prints the category (pod); in the GUI the categories are the same as the bold titles.
print(wolframalpha.wolframAlpha("mass of the moon", "result"))
print ("-----------------------------------------")
 
#Searches for the solution of a problem; if the solution consists of complex numbers or arrays, this may not give a proper result.
for e in wolframalpha.wolframAlphaSolution("3x + 5 = 7"):
        print (e)
print ("-----------------------------------------")
 
#This is another way of getting the result of a problem by using the pod = Solutions
for e in wolframalpha.wolframAlphaSolution("4x^2 - 3x + 5 = 7", "Solutions"):
        print (e)
print ("-----------------------------------------")
 
#This is yet another way of getting the result of a problem by using the pod = Solutions
print (wolframalpha.wolframAlpha("2x^2 - 3x + 5 = 7", "Solutions"))
print ("-----------------------------------------")
 
#With the pod one can get more than the results of problems, alternate forms for example (an array which needs to be processed)
print (wolframalpha.wolframAlphaSolution("3x^2 - 3x + 5 = 7", "Alternate forms"))
print ("-----------------------------------------")
 
#Prints HTML code; can be useful for extracting the image links, for example
string = wolframalpha.wolframAlpha(keyword,1)
print (string)
print ("-----------------------------------------")
 
#This is an example of how to extract the image URLs out of the HTML output from
#string = wolframalpha.wolframAlpha(keyword,1)
#The import statement is best done at the beginning of the script
import re
string = wolframalpha.wolframAlpha(keyword,1)
urls = re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', string)
for url in urls:
        print (url)
print ("-----------------------------------------")
 
#Same example as above, but instead of extracting all images it only gets the image of the search object itself
#Can be combined with the ImageDisplay service
#import re
#string = wolframalpha.wolframAlpha(keyword,1)
#url = str(re.findall('Image</b><br><img src="http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', string))[26:-2]
#print (url)
Example configuration (from branch develop):
!!org.myrobotlab.service.config.ServiceConfig
listeners: null
peers: null
type: WolframAlpha

Most of the services I created manually, and I used the GUI to connect the services' methods together.

Like joy.xAxisRaw -> panServo.move()

Here is a Python script to listen for a certain button press:
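A minimal sketch of such a script, assuming a Joystick service named joy on controller 0; the publishJoystickInput hook and the id/value fields follow MRL's Joystick convention, but the button id "0" will likely differ on your controller.

# minimal sketch: listen for a joystick button press in Python
joy = runtime.start("joy", "Joystick")
joy.setController(0)

# route the joystick's published input events into Python
python.subscribe("joy", "publishJoystickInput")

# callback invoked for every axis/button event
def onJoystickInput(input):
    # id is the component name, value is 1.0 while the button is down
    if input.id == "0" and input.value == 1.0:
        print("button 0 pressed")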

Well, I am slowly but surely getting my InMoov hand built. I am having an issue, however, with MyRobotLab. Right now I am just looking to validate that I can move and control all my servos before I mount everything in the forearm.
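For a bench test like that, a minimal sketch along these lines is a common starting point, assuming an Arduino on "COM3" and a servo signal wire on pin 3 (both assumptions; the attach style also varies a little between MRL versions):

# minimal sketch: validate a single servo before mounting it
arduino = runtime.start("arduino", "Arduino")
arduino.connect("COM3")

servo = runtime.start("servo", "Servo")
servo.attach(arduino, 3)  # bind the servo to Arduino pin 3

# exercise the full range, then return to a neutral position
servo.moveTo(0)
sleep(1)
servo.moveTo(180)
sleep(1)
servo.moveTo(90)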