Here's the link to the Kit webpage if you're interested.
yeah. I was always fascinated by the "swerve drive" drive trains too. You get a kit of standard parts: motors, electronics, and pneumatics. Then there is a rule book of what you can't use, and a total $$ limit on the bot.
the bases, aluminum stock, and gears often look very similar in FIRST - is there a list of parts you can get - how do they manage materials? or is it a big free for all ?
this one looked pretty good - https://www.youtube.com/watch?v=h2KJlICN_KE
Face tracking using OpenCV + the FaceDetect filter with my pan/tilt kit...
The FaceDetect filter is heavy, and the webcam video lags behind the servo response... this leads to overshooting.
GROG.. I need a lighter FaceDetect filter..... the video stream is slow when I apply PyramidDown and FaceDetect :(
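That overshoot is what delayed feedback does to a control loop: by the time a frame shows the face centered, the servos have already been told to keep moving. This is not MRL code, just a minimal pure-Python sketch (target, gain, and delay values are made up) of a proportional controller chasing a target through a stale measurement buffer:

```python
def track(gain, delay, steps=60):
    """Simulate a servo chasing a fixed target with a proportional
    controller whose position feedback arrives `delay` steps late."""
    target = 100.0
    pos = 0.0
    history = [0.0] * (delay + 1)   # measured positions, oldest first
    peak = 0.0
    for _ in range(steps):
        measured = history.pop(0)   # the controller sees a stale frame
        pos += gain * (target - measured)
        history.append(pos)
        peak = max(peak, pos)
    return peak

# with instant feedback the servo approaches the target monotonically,
# with two steps of video latency the same gain shoots past the target
print(track(0.5, delay=0))
print(track(0.5, delay=2))
```

Lowering the gain or feeding the PID fresher frames (hence the wish for a lighter FaceDetect filter) both shrink the overshoot.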
Here is the video...
CHECK IT OUT !
#file : Tracking.face.py
from java.lang import String
from java.lang import Class
from java.awt import Rectangle
from org.myrobotlab.service import Runtime
from org.myrobotlab.service import OpenCV
from org.myrobotlab.opencv import OpenCVData
from com.googlecode.javacv.cpp.opencv_core import CvPoint
from org.myrobotlab.service import Arduino
from org.myrobotlab.service import Servo

# PID services for the pan (x) and tilt (y) axes
xpid = Runtime.createAndStart("xpid","PID")
ypid = Runtime.createAndStart("ypid","PID")
xpid.setSetpoint(160) # we want the target in the middle of the x
ypid.setSetpoint(120) # and in the middle of the y

arduino = Runtime.createAndStart("arduino","Arduino")
pan = Runtime.createAndStart("pan","Servo")
tilt = Runtime.createAndStart("tilt","Servo")

# current servo positions - start centered
actservox = 90
actservoy = 90

# create or get a handle to an OpenCV service
opencv = Runtime.create("opencv","OpenCV")
opencv.startService()
# reduce the size - face tracking doesn't need much detail
# the smaller the faster
opencv.addFilter("PyramidDown1","PyramidDown")
# add the face detect filter
opencv.addFilter("FaceDetect1","FaceDetect")

# called on every published OpenCV frame
def input():
  global actservox, actservoy
  opencvData = msg_opencv_publishOpenCVData.data[0]
  if opencvData.getBoundingBoxArray().size() > 0:
    rect = opencvData.getBoundingBoxArray().get(0)
    # print 'found face at (x,y) ', rect.x, rect.y
    # track the centre of the detected face
    x = rect.x + rect.width / 2
    y = rect.y + rect.height / 2
    xpid.setInput(x)
    xpid.compute()
    servox = xpid.getOutput()
    actservox = actservox + servox
    ypid.setInput(y)
    ypid.compute()
    servoy = ypid.getOutput()
    actservoy = actservoy + servoy
    print 'x servo', int(actservox)
    print 'y servo', int(actservoy)
    pan.moveTo(int(actservox))
    tilt.moveTo(int(actservoy))
  return object

# create a message route from opencv to python so we can see the coordinate locations
opencv.addListener("publishOpenCVData", python.name, "input", OpenCVData().getClass())
# opencv.setCameraIndex(1)
"One Service to Rule them All!"
The Runtime service is the first service to start whenever MyRobotLab starts. It can install, start, and release all other services. You can see the services it controls under the runtime tab in the GUI. Some services require additional dependencies and must be installed before they can be started.
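In script form, that lifecycle looks roughly like this. It is a sketch that only runs inside MRL's Python (Jython) service, built from the Runtime calls used in the scripts on this page; the `Runtime.release` call is assumed from the description above rather than taken from a script here:

```python
from org.myrobotlab.service import Runtime

# create and start a service in one call
opencv = Runtime.createAndStart("opencv", "OpenCV")

# or create it first, adjust it, then start it
tracker = Runtime.create("tracker", "Tracking")
tracker.setCameraIndex(1)
tracker.startService()

# release a service when you are done with it (assumed API)
Runtime.release("tracker")
```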
A tutorial that explains how to use the Tracking service in MRL...
Python Minimal script
#file : Tracking.minimal.py
# a minimal tracking script - this will start all peer
# services and attach everything appropriately
# change parameters depending on your pan tilt, pins and
# Arduino details
tracker = Runtime.create("tracker","Tracking")

# setXMinMax & setYMinMax (min, max)
# this will set the min and maximum
# values of the servos
tracker.setYMinMax(10,170)
# set rest x,y
tracker.setRestPosition(90,90)
# set serial port of the Arduino
tracker.setSerialPort("COM12")
# set the pan and tilt pins
tracker.setServoPins(13,12)
# change cameras if necessary
tracker.setCameraIndex(1)
# start the tracking service
tracker.startService()

# set a point and track it
# there are two interfaces - one is float values
# where 0.5,0.5 is the middle of the screen
tracker.trackPoint(0.5,0.5)
# the other is integers which are pixel locations
# tracker.trackPoint(10, 120)
# don't be surprised if the point does not
# stay - it needs / wants a corner in the image
# to persist - otherwise it might disappear
# you can set points manually by clicking on the
# opencv screen
Python FULL AND SAFE script
#file : Tracking.manual.py
# we can create all the peer services for tracker manually,
# giving them new names and different values,
# then we can attach them. This will give us access to
# change any of the details of the other services. This must be
# done BEFORE "starting" the tracking service.

# create a tracking service
tracker = Runtime.create("tracker","Tracking")
# create all the peer services
rotation = Runtime.create("rotation","Servo")
neck = Runtime.create("neck","Servo")
arduino = Runtime.create("arduino","Arduino")
xpid = Runtime.create("xpid","PID")
ypid = Runtime.create("ypid","PID")

# adjust values
eye = Runtime.create("eye","OpenCV")
eye.setCameraIndex(1)

# flip the pid if needed
# xpid.invert()
xpid.setSetpoint(0.5) # we want the target in the middle of the x
# flip the pid if needed
# ypid.invert()
ypid.setSetpoint(0.5)

# set safety limits - servos
# will not go beyond these limits
neck.setPositionMax(170)

# here we are attaching to the
# manually created peer services
Lucas and Kanade developed an algorithm that tracks objects through a video stream. The algorithm requires fewer resources and usually responds faster than many other forms of tracking.
No additional hardware is needed for this tutorial
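The core of the method fits in a few lines: around each tracked point, Lucas-Kanade solves a small least-squares system built from image gradients. This pure-Python toy is a sketch of the math, not the OpenCV implementation MRL uses; the synthetic frames and window size are chosen only for illustration. The second frame is the first shifted one pixel to the right, so the estimated flow should come out near one pixel in x:

```python
import math

def lk_flow(I1, I2, cx, cy, r=4):
    """One Lucas-Kanade step: estimate the flow (vx, vy) at (cx, cy)
    by least squares over a (2r+1) x (2r+1) window."""
    a = b = c = 0.0                 # normal matrix [[a, b], [b, c]]
    d = e = 0.0                     # right-hand side [d, e]
    for y in range(cy - r, cy + r + 1):
        for x in range(cx - r, cx + r + 1):
            ix = (I1[y][x + 1] - I1[y][x - 1]) / 2.0   # spatial gradients
            iy = (I1[y + 1][x] - I1[y - 1][x]) / 2.0
            it = I2[y][x] - I1[y][x]                   # temporal gradient
            a += ix * ix; b += ix * iy; c += iy * iy
            d += ix * it; e += iy * it
    det = a * c - b * b             # needs texture in the window (det != 0)
    vx = (b * e - c * d) / det      # solve [[a, b], [b, c]] v = [-d, -e]
    vy = (b * d - a * e) / det
    return vx, vy

# synthetic frames: frame2 is frame1 shifted one pixel to the right
frame1 = [[math.sin(0.3 * x) + math.cos(0.3 * y) for x in range(30)]
          for y in range(30)]
frame2 = [[math.sin(0.3 * (x - 1)) + math.cos(0.3 * y) for x in range(30)]
          for y in range(30)]
vx, vy = lk_flow(frame1, frame2, 15, 15)
print(round(vx, 2), round(vy, 2))   # estimated (vx, vy)
```

The 2x2 system is only well-conditioned where the window has gradients in both directions, which is why tracked points stick best to corners.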
The MRLClient is a client adapter that allows other Java programs to interoperate with MyRobotLab. Although myrobotlab.jar can be included and used in a Java program directly, it may sometimes be desirable to communicate over the network through an adapter.
The MRLClient jar is a small binary which uses the network to send and receive messages to a running MyRobotLab instance. Other network adapters could be created for languages besides Java, but currently only Java is supported.