The stereoscopic pan/tilts for the 38 mm square board cameras made their first movements. Gimbals - R & L


11 years 4 months ago

Can't wait to see the video, I'm very curious to see it move. Curious too about how the top (I think pan) servo is mounted - same linkage system as your tilt?

Are you going to use an Arduino at some point? You mentioned in the shoutbox that the controller you're using is this one - (Pololu Maestro Servo Controller)

My thoughts were to use Maestros for servo control, tuning, and power. We can tier many off an Arduino. I'd gone this route over a year ago - before the Adafruit/Renbotics 16-channel LED/servo boards were made. I haven't slogged through the IC spec on the Adafruit yet, but I don't feel confident that a single-IC tier would be as plug-and-play for many servos. Anyway, is the Maestro installed or not? It works, and I've got the 6-channel Micro for the eyes and a Mini 18 for my arms.

Your thoughts?

Do you intend to use Arduinos? Have you used an Arduino in combination with the Maestro before? If so, can you attach or post an example of the control script?


Yes, or some Atmel.

No - just thinking logically that an Arduino doesn't distribute enough power, lacks protection circuitry, and has limited PWM pins. The Maestro boards can be plug-and-play tiered to the master brain - the Atmel. I am not a programmer - far behind.

I will be hooking up as Pololu examples indicate.
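Since an example control script was asked about above: here's a minimal Python sketch of the Maestro's "Set Target" command in Pololu's compact serial protocol (the same framing the Pololu examples use). Only the byte-building is shown - the actual serial write (e.g. via pySerial, with a placeholder port name) is left as a comment, since that depends on your hardware setup.

```python
def set_target_bytes(channel, target_us):
    """Build the Maestro compact-protocol Set Target command.

    The Maestro expects the target in quarter-microseconds,
    split into two 7-bit data bytes (low bits first)."""
    quarter_us = int(target_us * 4)
    return bytes([
        0x84,                      # Set Target command byte
        channel,                   # servo channel number
        quarter_us & 0x7F,         # low 7 bits of target
        (quarter_us >> 7) & 0x7F,  # high 7 bits of target
    ])

# Center a servo on channel 0 (1500 us pulse = 6000 quarter-us):
cmd = set_target_bytes(0, 1500)
print(cmd.hex())  # -> 8400702e

# To actually drive the servo (port name is a placeholder):
#   import serial
#   with serial.Serial("/dev/ttyACM0", 9600) as port:
#       port.write(cmd)
```

From the Arduino side the same four bytes would go out over `Serial.write()`, which is what makes tiering Maestros under one master fairly painless.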

For learning and testing your MRL, I will configure an Arduino as you are setting up, then try to translate that to add the Maestro.

I also have the 16-channel boards with I2C you guys are waiting on. Well, mine are the originals by Renbotics, but I've found they are poorly designed, with no power distribution or protection, among other faults. So these will be just to test and throw around. :-(


11 years 4 months ago

I noticed this setup has twice as many servos as InMoov. Are you planning to make his eyes cross? :-)


11 years 4 months ago

Nice work bstott !

I know Gael was creeped out by the independent eye control - but I think it's a good idea. It might help augment software ranging, and if you do a Marty Feldman you can get an increased field of view. Most cameras have a pretty poor field of view (e.g. 50 degrees) - with 2 you could get 100 degrees.
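As a back-of-envelope check on the "Marty Feldman" idea - assuming two identical cameras whose optical axes can be toed outward by some divergence angle (the 50-degree figure is just the example above):

```python
def combined_fov(camera_fov_deg, divergence_deg):
    """Total horizontal field of view of two cameras whose optical
    axes diverge by divergence_deg (0 = parallel, fully overlapped).

    Panning the cameras apart widens coverage by the divergence angle,
    capped at twice one camera's FOV (where the views stop touching)."""
    return min(2 * camera_fov_deg, camera_fov_deg + divergence_deg)

print(combined_fov(50, 0))   # parallel axes: still 50 degrees
print(combined_fov(50, 50))  # fully diverged: 100 degrees
```

The trade-off, of course, is that the stereo overlap region (needed for ranging) shrinks as the divergence grows - hence the appeal of being able to shift between the two modes.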

I like this picture of predator versus prey. Would it be cool if we could shift between the two?

Here's an old slightly relevant post I did -


11 years 4 months ago

In reply to by GroG


I'm reading your article - 03052013@1345 ET. It seems like you are ready to bring your older knowledge into the game for Cortex and incorporate it into MyRobotLab, InMoov, and onward? Coolio.

Yes, stereo cameras for FOV and depth of field. With trig, the location of the two cameras relative to each other, and their distances from the ground, distances are not hard to calculate. But you didn't mention that, combined with short-range IR or sonar, even more accurate calculations for distance and position awareness could be had. Start with cognition of self, add close-range sensor data to calibrate perception and relations, then combine this data to calculate distance too. ??? I'm thinking about your Cortex's initial development as it learns with stereo cameras collecting data about objects for fast distance calcs. It also seems that after a quick local calibration script, the robot would have a standard table of constants to use for quick, easy calcs. As this table of data grows - this almost sounds like the process would become automatic - kind of subconscious? <scary> 0_0


[Edit] I just read more ---- You can disregard my above writing as just friendly noise....

I've been rambling for a while, trying to figure out what I want and how to do it...

I'll put together a diagram later so you can see the process flow I've been thinking about so far.

Sonar? IR? Sure - bring it! MRL was designed to aggregate data in a manageable way, so definitely, the more sensors the better. Let them check one another and put them in a subsumption system which rates them.

I'm not too concerned with the speed of the trig - the more challenging part is having the computer recognize that locations in one camera are the same in the other. Once that is done, getting the disparity & ranging is pretty trivial.
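The ranging step really is trivial once correspondence is solved - it's the standard pinhole stereo relation Z = f·B/d. A sketch, where the focal length, baseline, and disparity values are made-up illustrative numbers:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo ranging: Z = f * B / d.

    focal_px     - focal length in pixels
    baseline_m   - distance between the two camera centers, in metres
    disparity_px - horizontal pixel shift of the matched feature
                   between the left and right images"""
    if disparity_px <= 0:
        raise ValueError("matched feature must have positive disparity")
    return focal_px * baseline_m / disparity_px

# e.g. 700 px focal length, 6 cm baseline, 21 px disparity:
print(depth_from_disparity(700, 0.06, 21))  # -> 2.0 (metres)
```

Note how nearby objects produce large disparities and far ones produce tiny disparities - which is why the hard part is the matching, not the math: a one-pixel matching error at long range swamps the estimate.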

SURF, I think, will help; OpenCV also has stereo calibration routines, but I haven't looked much into them.

Right now I'm pretty excited about the Tracking & Cortex services working together. I was thinking this morning that the Cortex might need to use OpenCV too - but the Tracking service will be using it all the time, "looking" for new objects... what to do?!?

CREATE A NEW INSTANCE OF OPENCV!  Aren't software play blocks great?  If you need a new one, you just call it by a new name!

Heh, it's that easy. Tracking will use one instance of OpenCV sitting on the webcam, looking for new objects. When it finds a new object, it saves a bunch of data and throws it to the Cortex. The Cortex, instead of hijacking the OpenCV on the webcam, simply creates a new OpenCV instance and begins processing the data - looking for meaning in still pictures versus a video feed.

A diagram to come....