Javadoc link
Example code (from branch develop):
#file : Twitter.py (github)
# start the service
twitter = runtime.start("twitter","Twitter")
 
# credentials for your Twitter account and API key info go here
consumerKey = "XXX"
consumerSecret = "XXX" 
accessToken = "XXX"
accessTokenSecret = "XXX"
 
# set the credentials on the twitter service.
twitter.setSecurity(consumerKey, consumerSecret, accessToken, accessTokenSecret)
twitter.configure()
 
# tweet all of your beep bop boops..
twitter.tweet("Ciao from MyRobotLab")
Example configuration (from branch develop):
!!org.myrobotlab.service.config.ServiceConfig
listeners: null
peers: null
type: Twitter

References

[[Twitter.simpletweet.py]]

[[Twitter.uploadpicture.py]]

[[Twitter.uploadFromOpenCV.py]]

Here Leonardo Triassi has made a spectacular InMoov, and it's now running some of the same scripts from the InMoov service page. Soon they will be sharing brains! Great work, Mr. Triassi!

Javadoc link
Example code (from branch develop):
#########################################
# wolframalpha.py
# description: used as a general template
# more info @: http://myrobotlab.org/service/WolframAlpha
#########################################
 
#Start the Service
wolframalpha = runtime.start("wolframalpha","WolframAlpha")
 
#Besides using the GUI of the engine, which works much like a usual search engine, one can use the engine with these methods.
keyword = "ape"
#Searches a keyword
print(wolframalpha.wolframAlpha(keyword))
print ("-----------------------------------------") #delimiter to see which output came from what method
 
#Does the same as print(wolframalpha.wolframAlpha(keyword))
print(wolframalpha.wolframAlpha(keyword,0))
print ("-----------------------------------------")
 
#Prints HTML code, which can be useful for extracting the image links, for example
print(wolframalpha.wolframAlpha(keyword,1))
print ("-----------------------------------------")
 
#Searches a keyword and only prints the category (pod); in the GUI the categories are the same as the bold titles.
print(wolframalpha.wolframAlpha("mass of the moon", "result"))
print ("-----------------------------------------")
 
#Searches for the solution of a problem; if the solution consists of complex numbers or arrays, this may not give a proper result.
for e in wolframalpha.wolframAlphaSolution("3x + 5 = 7"):
        print (e)
print ("-----------------------------------------")
 
#This is another way of getting the result of a problem by using the pod = Solutions
for e in wolframalpha.wolframAlphaSolution("4x^2 - 3x + 5 = 7", "Solutions"):
        print (e)
print ("-----------------------------------------")
 
#This is yet another way of getting the result of a problem by using the pod = Solutions
print (wolframalpha.wolframAlpha("2x^2 - 3x + 5 = 7", "Solutions"))
print ("-----------------------------------------")
 
#With the pod one can get more than the results of problems, for example alternate forms (an array which needs to be processed)
print (wolframalpha.wolframAlphaSolution("3x^2 - 3x + 5 = 7", "Alternate forms"))
print ("-----------------------------------------")
 
#Prints HTML code, which can be useful for extracting the image links, for example
string = wolframalpha.wolframAlpha(keyword,1)
print (string)
print ("-----------------------------------------")
 
#This is an example of how to extract the image URLs out of the HTML output from
#string = wolframalpha.wolframAlpha(keyword,1)
#The import statement is best done at the beginning of the script
import re
string = wolframalpha.wolframAlpha(keyword,1)
urls = re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', string)
for url in urls:
        print (url)
print ("-----------------------------------------")
 
#Same example as above, but instead of extracting all images it only gets the image of the search object itself
#Can be combined with the ImageDisplay service
#import re
#string = wolframalpha.wolframAlpha(keyword,1)
#url = str(re.findall('Image</b><br><img src="http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', string))[26:-2]
#print (url)
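The ImageDisplay combination mentioned in the comments above could look roughly like this (a sketch only: it assumes the ImageDisplay service exposes a display(url) method, and it reuses the regex and slice from the commented example):
 
# sketch -- the display(url) call is an assumption; the regex/slice is taken from the example above
import re
display = runtime.start("display", "ImageDisplay")
string = wolframalpha.wolframAlpha(keyword, 1)
url = str(re.findall('Image</b><br><img src="http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', string))[26:-2]
if url:
    display.display(url)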
Example configuration (from branch develop):
!!org.myrobotlab.service.config.ServiceConfig
listeners: null
peers: null
type: WolframAlpha

I created most of the services manually, and used the GUI to connect the services' methods together.

Like joy.xaXisRaw -> panServo.move()

Here is a Python script to listen for a certain button press:
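Such a listener could look roughly like this (a sketch, not the original script: the service names, controller index, and button id are assumptions, and the publishJoystickInput/onJoystickInput wiring follows the usual MRL convention, so it may need adjusting for your version):

# rough sketch -- "joy", "panServo", controller 0 and button "0" are assumptions
joy = runtime.start("joy", "Joystick")
panServo = runtime.start("panServo", "Servo")

joy.setController(0)  # pick the first attached game controller
python.subscribe("joy", "publishJoystickInput")  # route joystick events into Python

def onJoystickInput(data):
    # data.id is the axis/button name, data.value its current value
    if data.id == "0" and data.value == 1.0:  # button "0" pressed
        print("button 0 pressed")
        panServo.moveTo(90)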

    Well, I am slowly but surely getting my InMoov hand built. I am having an issue, however, with MyRobotLab. Right now I am just looking to validate that I can move and control all my servos before I mount everything in the forearm.
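
A quick way to validate one servo at a time before mounting could look like this (a sketch: the serial port, pin, and servo name are placeholders, and the attach call may need adjusting for your MRL version):

# sketch -- "COM3", pin 2 and the servo name are assumptions for your own wiring
import time

arduino = runtime.start("arduino", "Arduino")
arduino.connect("COM3")

thumb = runtime.start("thumb", "Servo")
thumb.attach(arduino, 2)  # attach the servo to pin 2 of the Arduino

# sweep through a few positions to confirm movement and range
for pos in (0, 90, 180, 90):
    thumb.moveTo(pos)
    time.sleep(1)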

Here is an attempt at tracking using 4 PIDs:

2 PIDs are for eye tracking

2 PIDs are for the head: the head moves in order to reduce the angle of the eyes from the center (90 degrees); a rough sketch of this idea follows below

  • The next step is to determine the angle range of the eye servos
  • How much faster the eye movement should be with respect to the head movement: in the script I made the head less responsive than the eyes

[[4PID.Tracking.py]]
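
For illustration, here is a rough sketch of the head-follows-eyes idea (this is not 4PID.Tracking.py: the servo names are hypothetical, and simple proportional gains stand in for the PID services):

# sketch only -- servo names and gains are assumptions
eyeX = runtime.start("eyeX", "Servo")
headX = runtime.start("headX", "Servo")

EYE_CENTER = 90.0
KP_EYE = 0.4   # eyes react quickly to the visual error
KP_HEAD = 0.1  # head is deliberately less responsive than the eyes

eyePos = EYE_CENTER
headPos = 90.0

def trackStep(targetErrorX):
    # targetErrorX: horizontal offset of the tracked object from the image center,
    # supplied each frame by the vision/tracking service
    global eyePos, headPos
    # the eyes chase the visual error
    eyePos += KP_EYE * targetErrorX
    eyeX.moveTo(eyePos)
    # the head moves to bring the eyes back toward 90 degrees
    headPos += KP_HEAD * (eyePos - EYE_CENTER)
    headX.moveTo(headPos)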

Example
This example reads the upper left corner of the screen:
 
// start an AWTRobot and limit capture to the upper-left 100x100 pixel region of the screen
AWTRobot awt = (AWTRobot) Runtime.createAndStart("awt", "AWTRobot");
awt.setBounds(0, 0, 100, 100);
// start TesseractOCR and route the AWTRobot's publishDisplay output to its OCR method
TesseractOCR tess = (TesseractOCR) Runtime.createAndStart("tess", "TesseractOCR");
tess.subscribe("publishDisplay", awt.getName(), "OCR");
Javadoc link

TesseractOCR will use optical character recognition on an image to read English words.

Currently limited to Linux 32/64 bit and Windows 32 bit. It is possible to run on Windows 64 bit by downloading the Java 32 bit JRE and then starting MRL in 32 bit Java.

There is currently one method to use:
public String OCR(SerializableImage image)
Pass TesseractOCR an image, and it returns a String of text.