So, in an effort to make the mega InMoov scripts easier to manage, and to make sure the InMoov can better respond to human speech, I've been working on letting InMoovs share their gestures, and on making it possible to reload gestures as they are edited without having to restart the whole bot.

So, what does this mean and how does it work?

First, there is the InMoov service. We have come to love this service; it controls all of the servos, camera, Kinect, PIR sensors, etc.

Second, there is the brain. This is ProgramAB. I have created a simple set of AIML files that I'm now starting to maintain for "Harry".

There's the "ear" which is now using Webkit speech recognition from the new webgui  (AngularJS) in MRL.  

And speech synthesis using Acapela speech.

OK, so in order to make this all work together, things have been broken up:

1. There is a "small" script that loads all of the services and wires up the callbacks.

https://github.com/MyRobotLab/pyrobotlab/blob/master/home/kwatters/Harry.py

2. There is a directory that contains one Python file per gesture; calling i01.loadGestures(directory) will load each of the files in that directory.

https://github.com/MyRobotLab/pyrobotlab/tree/master/home/kwatters/harry/gestures

3. There is a set of AIML that powers ProgramAB and lets you execute any gesture that was loaded.

https://github.com/MyRobotLab/pyrobotlab/blob/master/home/kwatters/harry/bots/harry/aiml/gestures.aiml

The hope is that people can start sharing these gesture Python files, and that this will allow for more iterative development: if you change a gesture file, you can just call loadGestures(dir) again and it will reload the gestures.

 

Generally speaking, the convention is that each gesture lives in its own file.

That file should define one Python method with the same name as the file.
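
For example, a hypothetical gestures/wavehello.py might look something like this (the calls are loosely based on what the existing InMoov scripts use, and the servo values are made up, not calibrated for any particular build):

def wavehello():
    # raise the left arm and open the hand -- positions here are illustrative only
    i01.setHandSpeed("left", 0.85, 0.85, 0.85, 0.85, 0.85, 0.85)
    i01.moveArm("left", 70, 90, 30, 20)
    i01.moveHand("left", 0, 0, 0, 0, 0, 90)
    sleep(2)
    # lower the arm back down and return to the rest position
    i01.moveArm("left", 5, 90, 30, 10)
    sleep(1)
    i01.rest()

After editing the file, calling i01.loadGestures("gestures") again picks up the change, and you can test it by typing wavehello() in the Python tab or by saying the matching phrase from the AIML.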

 

That's pretty much it. For now, I've carved up all 101 gestures that the InMoov full script has. I look forward to hearing people's feedback about this approach to see how we can make it better.

Lastly, the next step will be to "externalize" the configuration settings for all of the servos (min/max/rest positions) so that those can be stored in a separate file as well.
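
Something along these lines, perhaps (the file name, layout and numbers below are just a sketch of what such an external config could look like, not an agreed format):

# servo_settings.py - hypothetical external servo configuration
#   servo             min  max  rest
servoSettings = {
    "head.jaw":       (10,  40,  10),
    "head.rothead":   (30, 150,  90),
    "leftHand.index": ( 0, 160,   0),
}
# the main script would read this once at startup and apply the
# min/max/rest values to the matching servos on the InMoov instance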

 

Enjoy!

scruffy-bob

8 years ago

I love the idea. However, if we're going to do this, I'm going to suggest that we re-normalize the gestures first to a new baseline numbering system.

What I mean is that every servo has a range of motion (in our case, we'll call it 0% to 100%). However, what 0% and 100% are in actual servo settings is guaranteed to differ from InMoov to InMoov. For example, my jaw may run from an actual minimum servo setting of 10 to a maximum setting of 40, while someone else's may run from 0 to 30, and yet another may run from 20 to 80 to get the exact same range of motion.

If a gesture file tries to move a servo to 70 (because someone's robot requires it) and another robot has a physical limit of a servo value of 40, the gesture as written today is likely to break somebody's servos. Likewise, a value written for a robot whose servo range is 10 to 30 is likely not to have the desired effect on a robot whose range is 20 to 80.

In my opinion, a closed mouth (or servo) should always be 0% (regardless of what physical servo setting that represents on each robot), and a completely open mouth should be 100%.

However, each of the scripts is currently designed for a specific servo setting that may work for one robot but may not be at all appropriate for another one.

Since we have that cool "mapping" feature that Markus blogged about just last week, I'd suggest that each gesture script be rewritten so that 0 represents the complete minimum setting of the servo and 100 represents the maximum setting. If we do that, then with a simple mapping for each robot, the gestures can be completely reusable for all robots, regardless of what physical servo range they actually use for each movement.

In other words, I'm suggesting that each number in a gesture file represents the percentage the servo should be actuated, not an actual physical reference. So, 0 would always represent the lowest setting that servo allows, 50 would mean the servo should be halfway between the minimum and maximum range of motion, and 100 would represent the servo at the far end of its movement.
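
If I follow the idea correctly, the per-robot part would then collapse to a handful of map() calls using the existing servo mapping feature, something like this (the numbers are made up for one particular build):

# per-robot calibration: map the logical 0-100 range onto this build's physical range
i01.head.jaw.map(0, 100, 20, 55)        # this robot's jaw: closed at 20, fully open at 55
i01.leftHand.index.map(0, 100, 0, 160)  # this robot's index finger

# a shared gesture then only ever uses logical values
i01.head.jaw.moveTo(50)   # "mouth halfway open", whatever that is physically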

This will require rewriting every gesture file and will require each person to develop their own mapping, but I think in the long run, it will make the gestures far more transportable.

I'm not sure I explained this well, so if you don't understand what I'm trying to say, let me know.

In short, I'm suggesting that each gesture be re-written to represent "logical" values (from 0 to 100% of the total possible motion), rather than the actual "physical" servo values.

If we can agree that's the right way to do it, I'll actually volunteer to rewrite all of the existing gesture files according to the rules above, so we can all start at the same place. Once everyone writes their own mapping function (which only needs to be done once per robot), then all the gestures should work on all robots exactly the same way, regardless of how they have their servos implemented.

Another benefit of this is that it will be MUCH easier to tell what a gesture is trying to do, because 0 will always represent the minimum position, 100 will be the maximum position, and 50 will be halfway. In the example of the mouth, it's much easier to think (closed = 0, open = 100, halfway open = 50) than (closed = 20 and open = 55). Likewise, an open finger would be 100 and a closed finger would be 0. As long as we all agree on what 0 and 100 mean for each servo, the gesture files will be pretty easy to read (and create).

scruffy-bob

I agree with what scruffy-bob writes. But the 0% and 100% also have to be defined in relation to something. Most of them are easy, like the mouth: 0% => closed, 100% => open. But others are more difficult, like the shoulder. If 0% means the arm is straight down, then it can move in only one direction, assuming that 0% is the minimum and 100% is the maximum. But on my build the arm can move from about 30 degrees back to 180 degrees forward. Perhaps it's not a problem: -30 degrees could correspond to -17%. It would still work with a correct mapping, as long as a negative % is allowed.

And the suggestion to add a configuration file to hold the servo mapping, min, max, rest, controller, pin and so on would be great.

The downside of these suggestions is that they add a new abstraction layer, and it can be hard for a new InMoov'er to get started. But in the long run I think it's worth it, because today I can't see a good way to share gestures between different builds, unless they are built exactly the same way.

@Mats,

I think there has to be a "default" for InMoov limb movement. If there isn't, I can't think of any reasonable way to standardize gestures. I think you have to assume that the range of motion is about the same on any InMoov, but that the individual servo settings to cover that range of motion may be different. Someone may have mapped a servo backwards (from 80 to 40), someone may have mapped it from 0 to 40, and someone may have mapped it from 40 to 80, but when you go from 0% to 100%, the range of motion should be similar on each InMoov.

If this isn't true (like if you deviated from the standard InMoov build by allowing the head to rotate 360 degrees), then standardizing gestures doesn't make any sense. The people writing the gestures would have no idea what your robot is capable of doing. A "look to the right" command only makes sense if "right" has an absolute meaning (e.g. the maximum limit of the robot's ability to rotate its head to the right). In this example, for rotneck, 0% may be "look all the way to the right" and 100% may be "look all the way to the left", but if your robot can rotate its head 360 degrees, what does 0% mean now (other than that it's possibly possessed by demons)?

In your example, if all of the InMoovs have a similar range for shoulder (-17 degrees to 180), then 0% would be -17 and 100% would be 180.   If all of the InMoovs have a range of 0 degrees to 150 degrees, then "put your arm down" would have a different meaning for your robot than all the others, so standardization wouldn't be possible.

Generating a default mapping for each robot should be relatively straightforward (far easier than trying to take all the gestures people have now and remapping them individually for each robot). All you'd have to do is run a test to find the minimum and maximum positions of the servo for each limb. You pretty much have to do that anyway to keep from burning out a servo by trying to drive it past the physical limit of the limb.
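
One rough way to do that from the Python tab might be a little helper that walks a servo toward its limit a few degrees at a time so you can write down the last safe value (just a sketch; the servo name, start position and step size are placeholders):

def findLimit(servo, start, step, count):
    # nudge the servo in small steps; stop the script the moment the limb
    # reaches its physical limit and record the last printed value as min or max
    pos = start
    for i in range(count):
        servo.moveTo(pos)
        print "position", pos
        sleep(0.5)
        pos = pos + step

# e.g. walk the jaw upward from a known-safe middle position
# findLimit(i01.head.jaw, 30, 2, 20)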

Yeah .. there "has" to be a default or you could call it an archtype 
At the moment Gael's original InMoov is the archtype.

We had talked about this before, and I was hoping "Virtual InMoov" could be the archetype for 2 big reasons:

1. Virtual InMoov angles would be easier and more accurate to measure.

2. Virtual InMoov would not change over time (gears wear, tendons stretch) - mathematics and vectors in a matrix do not.

I agree with what GroG writes about using a Virtual InMoov as the base model for all math. I want the math to work on any InMoov, even if someone chooses to make a "mini" InMoov or one with slightly longer or shorter legs...

If for example the arm points straight down, that would be 0 degrees. If it points straight forward, that would be 90 degrees. If it points straight up, that would be 180 degrees. If it points a little bit back, that would be -15 degrees.

Then we would have a "Virtual InMoov" that would be independent of any default configuration. The gestures could be defined in relation to this model.

The translation from the Virtual InMoov could be made by mapping the servos, and setting min and max values that correspond to each builder's physical limits.

 

I don't see any conflict in having two archetypes.

1. The Virtual InMoov would be the reference for all math operations, like gestures, kinematics and so on. Degrees would probably be the natural unit to use, even if the SI unit is the radian.

2. Gael's build would define the default mapping from the Virtual InMoov to the physical InMoov.

 

The next question is then: how do we define the Virtual InMoov in a format that can be used for multiple purposes without making it too complex?

1. One file to define the Virtual InMoov?

                 URDF is one suggestion, but I can't find that anyone has built a loader for it in Java.

                http://the-diyer-diary.blogspot.se/2015/03/modelling-humans-and-humanoi…

                 Or any other format that has already been used to simulate an InMoov in a virtual environment.

2. One file to define the servo mappings, and perhaps also controller and pin mappings.

 

 

Reading the Blender InMoov model directly from MRL could be difficult. The Phobos project contains a Blender to URDF exporter. 

http://ros-users.122217.n3.nabble.com/Phobos-3D-robot-modelling-with-Bl…

https://github.com/rock-simulation/phobos

Not sure if it's something that could be used. We would need a URDF parser to be able to use it as the base for the math. Perhaps I'm thinking too generically. If the joints and the distances between them are defined in the InMoov service, then we don't need any external configuration. I don't want to make things more complex than necessary.

 

I think the control model is pretty simple .. a set of fixed-length line segments with joints in between.
I think we already have this - and with a control system applied to it, we can set what we'd expect to see with relative joints..

This would contain accurate measurements - which is great, but not very useful unless you can see it .. 

My preference for display would be to use Threejs .. but I have not implemented all of it yet...

hmmm .. positional info might be helpful even in numeric form before we are able to show the entire model correctly ..   this would be fun to work on .. got to fix Serial bugs first :)

GroG

8 years ago

 
I think most of us agree, we favor convention
over configuration ... "if" the convention is simple
and easy to remember.
 
I propose a "gestures" folder be auto-magically made if 
its not already there - and this is the default location
of gesture scripts
 
loadGestures()
method now calls loadGestures("gestures") ...     
 
if you want something instance "name" specific .. then you might
want to not call your instance i01 .. 
 
I think it's funny everyone has used the i01 I created for shorthand
so many moons ago .. the i01 is only around because I don't like typing
and wanted to be sure I could support multiple InMoovs. i01 stands for
InMoov #01 :P .. it could just as well be harry
 
Wonder when someone will experiment with this..
 
Potentially, couldn't all the callbacks be wired in the service ?
 
Glad we are having this discussion.  I think it would be possible to nearly reduce
most of the python
 
this is bitchin
 
cool - I assume you generated this based on file names or some other magic spell
 
but would it be advantageous to recognize a single keyword? e.g. 
instead of this
<category>
      <pattern>CLOSE YOUR RIGHT HAND</pattern>
      <template>CLOSE YOUR RIGHT HAND
        <oob>
          <mrl>
            <service>python</service>
            <method>exec</method>
            <param>closerighthand()</param>
          </mrl>
        </oob>
      </template>
</category>
 
heh, since I still don't know AIML .. I'm fudging it,
but the concept is - you have some marker or literal recognized, then * <- some other text.
That other text gets turned into a parameter and executes the expected gesture method.
It would allow not having to modify the AIML .. drag and drop a file into gestures - then saying "gesture break dance" => python.exec(breakdance())
<category>
      <pattern>GESTURE *</pattern>
      <template>GESTURE *
        <oob>
          <mrl>
            <service>python</service>
            <method>exec</method>
            <param>gesture(*)</param>
          </mrl>
        </oob>
      </template>
    </category>
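 
On the Python side that would need a small gesture(...) dispatcher. Something like the sketch below could work, assuming the loaded gesture methods end up in the interpreter's global namespace and the AIML passes the recognized text as a quoted string (e.g. <param>gesture("*")</param>):

def gesture(text):
    # "BREAK DANCE" -> "breakdance", then look up and call the loaded gesture method
    name = text.lower().replace(" ", "")
    func = globals().get(name)
    if callable(func):
        func()
    else:
        print "no gesture loaded for", name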
  
Or maybe even better - saying "gesture menu" would put you in a context where the literal is no longer necessary, and anything recognized would try to execute the appropriate Python method.
    
 Love the 1 gesture == 1 file .. and all gestures in their own directory !!!
 
 When troubleshooting, a bajillion gestures in a monolithic script makes things difficult.
 Gestures typically are the bad-boys .. usually it's setting up something different .. like PIR or 
 OpenCV or some such stuff - and it's hard to see these things through all the gesture data

The way I see it, in order to calibrate from one InMoov to another, the servo angles will be a little off depending on how the potentiometers are oriented.

I believe that the servos can be calibrated with 2 numbers.

The first one is the "encoder offset" (I will also refer to this as the "phase shift") of the servo. This is the difference between the servo's current angle and the desired angle.

The second one is the "gain"; this accounts for the gearing ratio in the joint.

In general, the final angle for the servo is something like w*t + theta, where theta is the encoder offset and w is the gain. For completeness, I suppose I should show that it's periodic, so a more complete view would be

( w * t + theta ) % 360    to ensure the value wraps around ...

I think for most use cases the gain will always be either 1 or -1, but the encoder offset might be a few degrees +/- for fine calibration.

Ignoring the "gain" , it means that we could configure each servo on the InMoov with just 1 number per servo, and that is the encoder offset.

 

thoughts?

 

From a math point of view, I strongly believe that two values can always do the translation from one linear function to another linear function. To quote Albert Einstein: "Make things as simple as possible, but not simpler."

y = a + bx is the general form of a linear function. And even if a servo rotates, the movement is still linear if you express it in degrees.

Translating from one linear function to a different linear function is very simple, and it is already implemented in the servo mapping. So there is no need for a new offset value.
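
For reference, that translation is just the two-point form of a line, which is all the servo mapping has to do (a sketch, with made-up ranges):

def remap(x, inMin, inMax, outMin, outMax):
    # map x from the reference range onto this robot's physical range
    return outMin + (x - inMin) * (outMax - outMin) / float(inMax - inMin)

# e.g. a reference shoulder range of -17..180 degrees onto a physical servo range of 20..160
print remap(0, -17, 180, 20, 160)   # where "arm straight down" lands for this build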

What we need is a reference model that can be used for the math: gestures, forward and inverse kinematics and so on. Gestures should be defined using this reference model. Then the reference InMoov's movements can be remapped to any physical InMoov using servo mappings. The reference model could be Gael's build, but I don't think everyone can have access to his physical model. A virtual InMoov model, on the other hand, could be accessible to anyone.

I think that you have already defined a math model to be able to calculate the forward and inverse kinematics. I remember that you talked about the Jacobian matrix?

Just my thoughts.

I enjoy the discussions, and having different opinions is good. I believe in consensus decision making, even if it sometimes takes more time. In the end, the selected solution will have strong support.

 /Mats

Hey Mats,

  I like how you think about the problem.  I understand that we have a mapping functionality already in MRL.  I guess at the end of the day, those 2 parameters yield the exact same transformation on the coordinate systems.

  I have been playing with DH parameters to model the InMoov arm. There's a basic implementation of both forward and inverse kinematics that uses a table of these DH parameters. I used a program called MRPT (Mobile Robotics Programming Toolkit) to build up and test out the DH parameters. I use this model in the InverseKinematics3D service to compute the forward kinematics and also to attempt to solve the inverse kinematics. (The arm can't reach everywhere in space; sometimes there is no solution, or at least joint constraints prevent iterating to the solution.)
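
  For anyone who hasn't met DH parameters before: each joint gets four numbers (theta, d, a, alpha) that define a 4x4 transform, and chaining the transforms of all the joints gives the forward kinematics. The sketch below is just the textbook formula with a made-up two-joint table, not the actual InverseKinematics3D code:

from math import cos, sin, radians

def dhTransform(theta, d, a, alpha):
    # standard Denavit-Hartenberg transform for a single joint, angles in degrees
    t, al = radians(theta), radians(alpha)
    return [[cos(t), -sin(t) * cos(al),  sin(t) * sin(al), a * cos(t)],
            [sin(t),  cos(t) * cos(al), -cos(t) * sin(al), a * sin(t)],
            [0.0,     sin(al),           cos(al),          d],
            [0.0,     0.0,               0.0,              1.0]]

def matMul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def forward(dhTable):
    # multiply the joint transforms in order; the last column holds the end-effector position
    T = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    for row in dhTable:
        T = matMul(T, dhTransform(*row))
    return T[0][3], T[1][3], T[2][3]

# made-up two-joint planar "arm": (theta, d, a, alpha) per joint
print forward([(90, 0, 80, 0), (45, 0, 60, 0)])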

  The one thing I noticed when mapping my DH params model is that the orientation and direction of the X axis of each joint was often pointing in the wrong direction, but a simple rotation would line them up.  This difference between the real model and the simulated model was easy to express as an array of encoder offsets.

  To say this another way, I found that operating in polar coordinates was simpler. The nice thing about polar coordinates is that you can simply use addition to rotate something. (Perhaps this is why quaternions are so popular for these sorts of operations.)

  So, now in polar coordinates, the calibration can be expressed as one variable, just the angle offset. There is one catch, however: sometimes my DH model would rotate counter to the direction that I expected. This is where I used the gain part of my calibration system. I could set the gain to -1 and this would take care of the offset direction.

  The DH model for the InMoov arm can be used to solve the joint angles to put the hand (end effector) at a point in space.  In order to calibrate between the DH model and the physical Arm,  I only needed to add or subtract to those computed angles to calibrate.  (In one or two cases, there was a -1 multiplied in there to change the direction of rotation.)  

  The Inverse Kinematics module uses an iterative approach to come up with a solution. It does, as you mention, compute the pseudo-Jacobian matrix. The algorithm has some basic support for joint constraints, but it's very primitive at this time. As it finds a solution, it publishes those joint angles. I've added simple IKJointAnglePublisher and IKJointAngleListener interfaces. (I think I'll drop the IK from the joint angle publisher.)

  There are also PointPublisher and PointsListener interfaces, so the Leap Motion / Myo / other devices can publish an x,y,z roll,pitch,yaw point, and the IK3D service can translate and rotate that point into a new coordinate system so that it lines up with the robot's perspective (coordinate system).

  Here's a demo of the above stuff "working" :)