Currently, the different InMoov body parts are moved by modifying the joint angles of the parts. It works well, but a simple movement requires a lot of input.

What I think would be great is to be able to issue a simple command, like move("fingertip", x, y, z) or move("rightPalm", x, y, z, roll, pitch, yaw), and have MRL compute the different joint angles of the body part to reach that point.

kwatters has already written a very good InverseKinematics service that is a great start. That service increments the values of the different joints to move the palm of an InMoovArm toward a specified coordinate.

I'm working on another model that computes the best way to reach the target coordinate using genetic algorithms. In that model, the computer starts by guessing the values of the joints of the different parts and weights the results by how far the result is from the target coordinate, by how far the joints have to move (minimize movement), and by the time it takes to reach the target coordinate (minimize that time, using the MaxVelocity value of the servos).
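
Roughly, the weighting could look something like this (a sketch only; the names, the weights and the way the position of the part is obtained are placeholders, not my actual code):

import math

def fitness(candidate_xyz, target_xyz, candidate_angles, current_angles, max_velocities,
            w_dist=1.0, w_travel=0.1, w_time=0.1):
    """Lower is better. candidate_xyz is the x,y,z of the body part computed
    from candidate_angles (e.g. with the forward kinematics of the arm)."""
    x, y, z = candidate_xyz
    tx, ty, tz = target_xyz
    # how far the candidate ends up from the target coordinate
    distance = math.sqrt((x - tx) ** 2 + (y - ty) ** 2 + (z - tz) ** 2)

    # how far the joints have to move (minimize movement)
    travel = sum(abs(c - a) for c, a in zip(candidate_angles, current_angles))

    # time to reach, limited by the joint with the longest move at its MaxVelocity
    time_needed = max(abs(c - a) / v
                      for c, a, v in zip(candidate_angles, current_angles, max_velocities))

    return w_dist * distance + w_travel * travel + w_time * time_needed

The genetic algorithm would then keep the candidates with the lowest score and mutate/crossover them for the next generation.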

There are more things I want to add to my model, like the current speed of the servos (easy, I just have not done it yet) and a way to find out whether the proposed movement is acceptable. By that, I mean that the movement should not allow a body part to hit another body part (or any object set in the environment of the robot).

But I'm still not sure how to implement that last point.

It needs to be computed fast, so I probably can't use a complex 3D model of the parts to check whether they intersect, but using a basic shape like a cylinder is probably acceptable.

I need to be able to simulate the movement of the parts over time to check whether the parts bump into each other.
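
For example, treating each body part as a capsule (a line segment with a radius), a fast collision test could look something like this (a sketch only, not existing MRL code; two parts collide when the closest distance between their segments is smaller than the sum of their radii):

import math

def _sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def _dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def _clamp01(v): return max(0.0, min(1.0, v))

def segment_distance(p1, q1, p2, q2):
    """Closest distance between 3D segments p1-q1 and p2-q2 (assumes non-degenerate segments)."""
    d1, d2, r = _sub(q1, p1), _sub(q2, p2), _sub(p1, p2)
    a, e, f = _dot(d1, d1), _dot(d2, d2), _dot(d2, r)
    b, c = _dot(d1, d2), _dot(d1, r)
    denom = a * e - b * b
    s = _clamp01((b * f - c * e) / denom) if denom != 0.0 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        t, s = 0.0, _clamp01(-c / a)
    elif t > 1.0:
        t, s = 1.0, _clamp01((b - c) / a)
    c1 = [p1[i] + d1[i] * s for i in range(3)]
    c2 = [p2[i] + d2[i] * t for i in range(3)]
    return math.sqrt(_dot(_sub(c1, c2), _sub(c1, c2)))

def capsules_collide(part_a, part_b):
    """Each part is (start_point, end_point, radius)."""
    (p1, q1, ra), (p2, q2, rb) = part_a, part_b
    return segment_distance(p1, q1, p2, q2) < ra + rb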

I'm just beginning to think about this, so any input on how to implement it, or any element that could be added to my model, is welcome. I will be happy to describe in more detail how my model works if you need more details on it.

 

juerg

8 years 1 month ago

The way we should head: I most certainly agree. To me it looks like a "final goal" we would like to reach, comparable with the request to make InMoov run and not simply walk.

So maybe let's concentrate on the first goal: move to a specified 3D position. I did create a script that, based on all the servo angles, makes the eyes look at the right or left hand. While it somehow worked, I realized that my math is not in shape to follow the different tutorials one can find on multi-axis position calculations and reaching paths. So I am back to step -20, learning more basic math on Khan Academy ;-(

Anyway, that is my own problem and not related to the goal you suggest.

To get anywhere closer to a solution we will first need to agree on an InMoov 3D reference point. For my script I used the mid-stomach rotation center. Kevin's example used the omoplate rotation center, as I remember, but that is different for the left and right side. Not sure; maybe the functions need to reside distributed in the objects' sub-reference-points (e.g. the wrist rotation center) and not in a global orientation function?

OK, assume we agree on an InMoov 3D reference point and the directions of roll, yaw and pitch.

Kevin has already given an example of how the servo angles have to be set to reach a new position based on the current position. I also experimented with that and was able to move to a new x,y,z position. This however did not include more complicated things like mass, servo capabilities, acceleration and deceleration, and of course not a "fingertip location", e.g. to switch Markus's lights off.

"Finger tip" is in my opinion quite a good example as it will be different for every InMoov that has been built so far and will be built in the future. The strings live their own life...

So we would need a "self-calibration" of the robot, e.g. mark the reference points (elbow, wrist, fingertips) with recognizable flags and watch and record their positions for different angle settings with the camera or (maybe easier or more precise) with the Kinect.

This looks to me like a lot to do before thinking about object collision or dynamic object tracking.

So, back to the first question: what could be the InMoov's 3D reference center (maybe keeping walking in mind)?

 

The 3D reference point can be anything; all we need to know is how the parts move in relation to the previous joint or the reference point.

I also use the mid-stomach center as reference point because that's where my movement originates and it's also the fixed part of my robot, so having that as reference simplifies things. In Kevin's example, he uses the omoplate as his reference point because that is the fixed point of his arm. I can use Kevin's arm just by modifying how his omoplate attaches to my reference point (distance and angle).

Defining roll, pitch and yaw is pretty much defining what is up/down, left/right and front/back.

I think midTorso is a good reference point because it's pretty much the center point of the robot and a fixed point in relation to all other body parts (legs, arms, head etc.).

Kevin found a really good way to define how each joint interacts with the previous one with 4 parameters (briefly: the distance between their rotation points, the radius of the rotation, and the angles between the rotation points). The joints can be chained together to define what the "rightFingertip" or "leftFingertip" is without changing the parameters of the common parts. So I really think it's the way to go.

A calibration system will be needed so that different InMoovs can use the same model, but right now we don't really have a model to calibrate an InMoov against. And that's partly why I started this discussion: to set up a model that we all agree on.

So yes, maybe I'm getting ahead of myself with the object-collision step, but that is something I want to be able to achieve: an InMoov able to move on its own without breaking its arm because it got stuck on itself :D

 

 

I think the Virtual InMoov may be a good start.

The bone structure and the different parts are already defined. It can also be exported from Blender to a .json file that for example three.js can understand.

Three.js is a 3D engine that runs in the browser. It's already available to use in MRL; for example, the InverseKinematics3D service and the MPU6050 service use it.

Perhaps that json file can be used as a base for the software model of InMoov. 

You can find a lot of information about the Virtual InMoov in this post:

http://myrobotlab.org/content/virtual-inmoov-webgui-start

Some other discussion about this subject from earlier this year:

http://myrobotlab.org/content/inmoov-programab-and-gestures

And if you search this site or Google for virtual InMoov, you can find a lot more information.

If we go down this path, then understanding the .json file and being able to load at least the bones into one or more Jacobian matrices would be a first step.

 

I'm happy to see people chatting about these topics.  They're pretty fundamental to working with robotic arms, legs.. and really any set of rigid linkages.

The "forward kinematics" is understanding where you currently are.  This is always with respect to some reference frame.  For the DH parameters in my example,  I used the omoplate as the "origin" of my reference frame, but there's nothing to prevent you from just inserting another linkage and putting the origin reference frame somewhere else in the inmoov body. 

I think the most important thing is to have a set of DH parameters that describe the robot that you are trying to move.  Each set of DH parameters represents 1 linkage in the system.

The DH parameters turn into a set of "homogeneous transformation matrices"... By multiplying together all of those matrices you can come up with one full description of the entire system. Amazingly, if you know the masses of each linkage, you can include those in the calculation, and all of the inertia and force calculations also become straightforward.

Once you multiply them all together, the x, y, z, roll, pitch, and yaw can all be determined from the result. This makes the math very quick and easy to do. (I did not include roll/pitch/yaw in "getCurrentPosition", but it should be relatively trivial to add those.)
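
To make that concrete, here is a minimal sketch of the standard DH convention (Python with numpy; the parameter values at the bottom are made up, not real InMoov measurements):

import numpy as np

def dh_transform(theta, d, a, alpha):
    """One linkage's (theta, d, a, alpha) as a 4x4 homogeneous transformation matrix."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_params):
    """dh_params: list of (theta, d, a, alpha) tuples, one per linkage."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = np.dot(T, dh_transform(theta, d, a, alpha))
    return T  # T[0:3, 3] is the x,y,z; T[0:3, 0:3] encodes roll/pitch/yaw

# two made-up linkages, just to show the call
pose = forward_kinematics([(np.radians(30), 0.0, 0.10, 0.0),
                           (np.radians(45), 0.0, 0.12, 0.0)])
print(pose[0:3, 3])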

Now, the real trick comes in figuring out the "inverse kinematics". Basically, there are a bunch of issues:

1. There might not be a solution (something is out of reach).

2. There might be an infinite number of solutions.

3. There might be situations where the matrix loses rank during the solution and you get a divide-by-zero / singularity problem.

The implementation that I added to MRL is one known as "gradient descent". (Actually, this is pretty much the same algorithm that is used to train neural networks, but that's a different topic altogether.)

The idea is that you know where you are and you know where you want to be. You can add a small step to the x,y,z position and then use the "pseudo-inverse of the Jacobian" to compute the corresponding change for each of the angles. Then it's just a matter of iterating until you reach the destination.
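
A small sketch of one iteration of that update (Python with numpy; the Jacobian is passed in as a plain 3 x n matrix and everything here is illustrative, not the actual MRL code):

import numpy as np

def ik_step(joint_angles, current_xyz, target_xyz, jacobian, step_size=0.01):
    """One gradient-descent step using the pseudo-inverse of the Jacobian.

    jacobian: (3 x number_of_joints) matrix mapping small joint changes to
    small cartesian changes at the current pose (dx = J . dq)."""
    # small step in cartesian space toward the goal
    delta_x = step_size * (np.asarray(target_xyz, dtype=float) - np.asarray(current_xyz, dtype=float))

    # the pseudo-inverse gives the corresponding joint-angle change: dq = pinv(J) . dx
    delta_q = np.dot(np.linalg.pinv(jacobian), delta_x)
    return np.asarray(joint_angles, dtype=float) + delta_q

# then iterate: recompute the position with the forward kinematics, rebuild the
# Jacobian at the new pose and call ik_step() again until the goal is reached.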

There are problems with the inverse Jacobian, in particular issue #3 where you lose rank. (I added a small amount of annealing to the algorithm to hopefully avoid some of those divide-by-zero issues.)

In order to address the issues with the singularity (divide by zero), another approach builds on the "quaternion", a mathematical concept created by the Irish mathematician William Rowan Hamilton. It was extended to the "dual quaternion", and many have used this approach to compute inverse kinematics.

Either way (genetic algorithms included), it's an iterative approach that attempts to solve these problems by stepping towards a goal...

My implementation is based on the Stanford class that is free and online:

http://cs.stanford.edu/groups/manips/teaching/cs223a/

All of the lectures are also on YouTube if you are interested:

https://www.youtube.com/watch?v=0yD3uBshJB0&list=PL64324A3B147B5578

 

 

juerg

8 years 1 month ago

Reference point

I agree that once you have defined the chain of links (and that should be the same for all InMoovs) you can traverse through all the positions in the chain.

I was referring to your first request: instead of setting angles on joints, we request a defined part of InMoov, e.g. the fingertip of the index finger of the left hand, to move to an x,y,z position.

To do that you need to refer to an origin point your x,y,z position is relative to.

As Kevin pointed out it might be advantageous to use quaternions for the calculations to avoid the singularity pitfall.

So as a first step we could try to agree on reference points and functions to

a) set a reference point (e.g. set3dReference(MID_STOMAG, x, y, z))

b) set a target point (e.g. move3dTo(LEFT_PALM_CENTER, x, y, z))

and have all the calculations for the joint angles done for that?

Once that is in place we could start to add things like joint limits, servo capabilities, masses, obstacles?

Hi Juerg,

  I understand the idea that you're getting at.  The way this would normally be done in kinematic models is that you would have a "dummy" link between your reference point and the first link.  This dummy link would provide a translation and rotation to get you from your origin to the start of the robot arm linkages.

There is currently a method in the IK3D service that does this for you.

createInputMatrix(double dx, double dy, double dz, double roll, double pitch, double yaw) 

This "createInputMatrix" provides a pre-translate/rotate operation so the moveToGoal is relative to any point you want in space.

the "dx,dy,dz" represent the translation in space, the roll,pitch, and yaw represent the rotation of coordinate systems.

The idea is that the IK service will be driven by some sort of input, like a joystick, Leap Motion, Kinect, or Myo band. The coordinate systems of those input devices are not necessarily the same as the origin coordinate system of the DH model, but if you translate/rotate the output from the joystick before it goes into IK3D then those coordinate systems can be made congruent.
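
A rough usage sketch in an MRL Python script (the createInputMatrix signature is the one quoted above; the service name string and the moveTo call are assumptions about the current IK3D API, so check the service before relying on them):

ik3d = Runtime.createAndStart("ik3d", "InverseKinematics3D")

# translate by (dx, dy, dz) and rotate by (roll, pitch, yaw) so that goal
# coordinates coming from an input device are expressed in the DH model's origin frame
ik3d.createInputMatrix(0.0, 400.0, 0.0, 0.0, 0.0, 3.14159)

# a goal given in the input device's coordinate system is now pre-translated/rotated
# before the iterative solver runs (method name assumed)
ik3d.moveTo(100.0, 50.0, 200.0)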

Perhaps this does what you are looking for already?

-Kevin

 

I think that is very useful.

We also have to keep in mind that the IK3D service must be as generic as possible, so it can be used by someone who wants to use it for their mega super duper dodecapode bot.

 

So the InMoov service can set up the links (and reference point) that the IK3D service will use. I think Kevin already set things up in this way, except that it was done for the InMoovArm only.

It's much too abstract to me. If it's already there and only needs dummy links, give Markus a script that

a) moves his InMoov into a position to handle the door handle (a European lever, not a turning knob)

b) moves a hand over the door handle and closes the hand

and then turns the wrist and pulls/[pushes??] the door open in the correct arc given by the door's dimensions and hinges.

This might include having the base move because of joint restrictions or unreachable positions.

juerg, some things are already there, but many more things need to be added before it's fully working.

But we are not far from being able to do what you describe.

You need a way to know the location of the door handle.

The IK3D service can already bring the hand to the location you want, but it still doesn't orient it the right way (roll/pitch/yaw) to grab the handle.

But before using it, we need some collision detection. I don't want Markus's InMoov to break its arm by trying to push it into its own body.

The base movement can also be added as a link to allow Markus's InMoov to reach the position.

There was once a discussion about calibration that used special optical marks OpenCV (Kinect?) can recognise. Assuming his door handle is marked by such a tag, can we get a 3D location of it and its direction (maybe with more than one tag)?

We could then try to move the base into a good opening position (shoulder of the opening arm at a right angle to the handle and shoulders parallel to the handle)?

Or, before doing any movements, do you first want to evaluate moving-space restrictions so we won't bump into anything before moving the base?

Hi juerg

Yes, that's what I want an InMoov to be able to do, but there is still a lot to do:

  • Identify items in its surroundings
    • I don't know yet how to achieve this, and I have not looked at it yet. But I believe that with a camera/Kinect/other sensors it's something doable
  • Identify the target to reach
  • Compute the movement needed to reach that target
    • IK3D can already do that, but needs to be adjusted to find the right orientation
    • We still need a calibration system to allow each InMoov to easily use the service.
  • Simulate the movement and do a collision test to validate the movement
    • I think everything is in place to be able to implement this, at least for its own body parts
    • With IK3D, servo speed and maxVelocity, we can know what the position and orientation of each body part will be at each timeframe (a small sketch of this idea follows at the end of this post)
    • If a potential collision may happen, stop the movement, or cancel or modify it to avoid the collision
    • This is where I want to work now, as I believe this is an important behavior to have to avoid breaking anything
  • Execute the movement
    • We can already do that

This is what I think is the chain of events that must happen to have a robot generate its own movements and be able to do tasks.
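
A small sketch of the "position at each timeframe" idea from the list above (illustrative only; the collision test and all names are assumptions, not existing MRL code):

def simulate_movement(start_angles, target_angles, max_velocities, dt=0.05):
    """Yield the joint angles at each time step, each servo moving at its max velocity."""
    angles = list(start_angles)
    while any(abs(t - a) > 1e-6 for a, t in zip(angles, target_angles)):
        for i, (a, t, v) in enumerate(zip(angles, target_angles, max_velocities)):
            step = v * dt                    # degrees this servo can cover in one time step
            diff = t - a
            angles[i] = t if abs(diff) <= step else a + (step if diff > 0 else -step)
        yield list(angles)

# validate a movement before executing it:
# for frame in simulate_movement(current, proposed, velocities):
#     if any_parts_collide(frame):   # e.g. using the capsule test sketched earlier
#         break                      # cancel or re-plan the movement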

I have seen pictures of point clouds done by Kevin with the Kinect. If, for simplicity, we use an optical tag on our target, maybe Kevin could give us a Kinect-relative 3D position of it? With the mid-stomach reference we might be able to center on it? For opening a door we would however need to be rather close to it, and the Kinect might not like that? Borg some automated vehicle driving capabilities into MRL? I am sure GroG will like this idea!