2011.09.23

Today I heard an episode of Radiolab ("Found"), part of which discussed our ability to do localization, or in the robot world, SLAM.  It described the process in detail with "grid cells", "position cells", "boundary cells", and "location cells".  There was also a part regarding location aphasia. 

Another segment featured a linguist who studied an Aboriginal tribe whose language had something like 180 descriptions of heading.  When you greeted someone, you would not say "How are you doing", but rather "Where are you going".  So within the language/culture it was very important to know your location and heading.  As she continued, she said that at some point she was struck with a "bird's-eye view" of herself and the things around her, and was then able to communicate "intelligently" with this tribe.  This, in my mind, is related to another insight from Radiolab: how language affects (or in a way is) thinking. 

An experiment was done where a rat is placed in a room with one blue wall.  The room is a perfect square and some food is hidden to the left of the blue wall.  The rat can do "left" and the rat can see "blue", but it doesn't get the concept of "left of the blue wall" and therefore does no better in its search for the treat.

The way the woman formed a "bird's-eye image" and the rat's inability to grasp "left of the blue wall" are both related to our ability to abstract our surroundings into a map.  And this in turn is related to SLAM in robotics.

For the most part computers are great at details but not very good at generalization; they persist in a nearly completely abstract realm.  Strange that we (living things) have only recently begun to grasp abstraction, and for most of our history lived in a simple, reactionary, sensory-stimulus mode.

What if we made the computer distinctly aware of the components & sensors & input it was receiving?   Then globbed a neural net behind it.

Just some thoughts of the day.

2012.01.10

Thinking about a differential drive service which will calculate heading, distance, etc. and present the results on a grid. This would be the first step in trying to get a working SLAM service in mrl.

Below are some notes:

 

differential platform
 
calibration
left clicks per inch
right clicks per inch
 
(units in/cm)
width (distance between wheels)
wheel radius 0.75 inches
 
encoder
count 12 segments 
each segment is (2 * 0.75) * pi / 12 ≈ 0.39 inches
each tick of an encoder represents about 0.39 inches of travel - this will deviate (find by observation)
 
if one wheel is stationary and the other travels 1 segment (0.39 inches), the resulting heading change is that arc length divided by the distance between the wheels - roughly 0.39 / WHEEL_BASE radians (see the sketch after these notes)
 
platform shape (circle | square | triangle)
width
depth
 
distance between wheels
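 
A quick sketch of that tick arithmetic in Java (the 0.75 inch wheel radius and 12-segment encoder come from the notes above; the wheel base value is a made-up placeholder until the real platform is measured):
 
// geometry from the notes above
double wheelRadius = 0.75;   // inches
int    ticksPerRev = 12;     // encoder segments per wheel revolution
double wheelBase   = 5.0;    // inches - made-up placeholder, measure the real platform

// travel per encoder tick: circumference / ticks
double inchesPerTick = (2.0 * Math.PI * wheelRadius) / ticksPerRev;   // ~0.39 in

// one wheel held still, the other advances one tick: the platform pivots
// around the stationary wheel, so the heading change is arc / wheel base
double headingChangeRad = inchesPerTick / wheelBase;
double headingChangeDeg = Math.toDegrees(headingChangeRad);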
 
Great Reference Here:
 
B. Math
 
The simplest method of location calculation is for a differentially steered robot with a pair of drive wheels and a castering tail or nose wheel. Other geometries such as traditional rear-wheel-drive with front-wheel Ackerman steering can also work but may require more complex calculations, depending on how the vehicle is instrumented.
 
For the differentially steered robot, the location is constantly updated using the following formula. The distance the robot has traveled since the last position calculation, and the current heading of the robot, are calculated first:
 
(1) distance = (left_encoder + right_encoder) / 2.0
(2) theta = (left_encoder - right_encoder) / WHEEL_BASE;
where WHEEL_BASE is the distance between the two differential drive wheels.
 
Note the convention is (left-right) rather than what you may remember from trig class. For navigation purposes it is useful to have a system that returns a 0 for straight ahead, a positive number for clockwise rotations, and a negative number for counter-clockwise rotations.
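 
A minimal sketch of that step in Java (the names are mine, not mrl's; leftTicks/rightTicks are the encoder counts since the last update and the *_INCHES_PER_TICK constants come from calibration):
 
// encoder movement since the last update, converted to inches
double leftDelta  = leftTicks  * LEFT_INCHES_PER_TICK;
double rightDelta = rightTicks * RIGHT_INCHES_PER_TICK;

// (1) distance the center of the platform moved
double distance = (leftDelta + rightDelta) / 2.0;

// (2) heading change, (left - right) so clockwise comes out positive
double dTheta = (leftDelta - rightDelta) / WHEEL_BASE;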
 
With these two quantities, a bit of trig can give the robot's position in two dimensional Cartesian space as follows:
 
(3) X_position = distance * sin(theta);
(4) Y_position = distance * cos(theta);
You can multiply theta * (180.0/PI) to get the heading in degrees.
 
The robot's odometry function then tracks the robot's position continuously by accumulating the changes in X, Y, and theta 20 times per second, and maintains these three values: X_position, Y_position, and theta. These values create a coordinate system in which X represents lateral motion with positive numbers to the right and negative numbers to the left, Y represents horizontal motions with positive numbers forward and negative numbers backwards, and theta represents rotations in radians with 0 straight ahead, positive rotations to the right, and negative rotations to the left.
 
The technique by which these values are used to steer the robot toward an assigned target position is beyond the scope of this article. (However, see the odometry.txt and subsumption files referenced above).
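 
Putting it together, here is a rough sketch of the odometry accumulation I have in mind for the service (my own names, not mrl code; it assumes encoder deltas arrive about 20 times per second and that the calibration constants are already known):
 
public class Odometry {

    // calibration constants - measured, not guessed (see the next note)
    private final double leftInchesPerTick;
    private final double rightInchesPerTick;
    private final double wheelBase;          // inches between the drive wheels

    // accumulated pose
    private double x;       // lateral: + right, - left
    private double y;       // forward: + ahead, - behind
    private double theta;   // radians: 0 straight ahead, + clockwise

    public Odometry(double leftInchesPerTick, double rightInchesPerTick, double wheelBase) {
        this.leftInchesPerTick = leftInchesPerTick;
        this.rightInchesPerTick = rightInchesPerTick;
        this.wheelBase = wheelBase;
    }

    // call ~20 times per second with the tick counts since the last call
    public void update(int leftTicks, int rightTicks) {
        double left  = leftTicks  * leftInchesPerTick;
        double right = rightTicks * rightInchesPerTick;

        double distance = (left + right) / 2.0;   // (1)
        theta += (left - right) / wheelBase;      // (2) accumulate heading

        x += distance * Math.sin(theta);          // (3)
        y += distance * Math.cos(theta);          // (4)
    }

    public double getX() { return x; }
    public double getY() { return y; }
    public double getHeadingDegrees() { return Math.toDegrees(theta); }
}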
 
2012.01.10
Another important detail: calibrate WHEEL_BASE, RIGHT_CLICKS_PER_INCH, & LEFT_CLICKS_PER_INCH instead of just typing in values.
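 
One simple way to get those numbers (a sketch of the procedure, with made-up placeholder counts): drive the platform straight over a tape-measured distance to get clicks per inch, then spin it in place exactly one full turn and back the wheel base out of the counts.
 
// made-up placeholder values - replace with observed counts
double measuredInches     = 120.0;  // straight run measured with a tape
long   leftTicksStraight  = 306;    // counted during that run
long   rightTicksStraight = 309;

double LEFT_CLICKS_PER_INCH  = leftTicksStraight  / measuredInches;
double RIGHT_CLICKS_PER_INCH = rightTicksStraight / measuredInches;

// spin in place exactly one full turn: each wheel traces a circle whose
// circumference is PI * WHEEL_BASE, so the wheel base falls out of the counts
long   leftTicksSpin = 40;          // made-up placeholder
double WHEEL_BASE = (leftTicksSpin / LEFT_CLICKS_PER_INCH) / Math.PI;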
 
I already created a Differential Drive service; however, it was used in conjunction with optical tracking.  Now I am interested in creating one which will work with wheel encoders.  Hopefully, I will be able to merge the two into something better than either.
 
Ultrasonics would be useful too.  I don't know their resolution, but it would be nice to be able to discern the "line" of a wall.
 
Reference: Polar to Cartesian (and back again)
 
// polar to Cartesian
double x = Math.cos( angleInRadians ) * radius;
double y = Math.sin( angleInRadians ) * radius;

// Cartesian to polar.
double radius = Math.sqrt( x * x + y * y );
// Math.acos( x / radius ) only recovers angles with y >= 0;
// atan2 handles all four quadrants (and x == 0)
double angleInRadians = Math.atan2( y, x );

That site is great - here is an example of image rotation - http://mindprod.com/jgloss/affinetransform.html#ROTATING
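
As a quick usage example of those conversions (my own sketch, not from the site): a sonar ping taken at a known servo angle with a range reading becomes a point in the robot's frame - the raw material for eventually discerning the "line" of a wall.

// hypothetical sonar reading: servo angle (radians, 0 = straight ahead) and range (inches)
double servoAngle = Math.toRadians(30.0);
double range = 42.0;

// polar to Cartesian, same form as above
double px = Math.cos(servoAngle) * range;
double py = Math.sin(servoAngle) * range;

// and back again
double r = Math.sqrt(px * px + py * py);
double a = Math.atan2(py, px);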