I would like to start a new approach to making the following functions available in the InMoov service:

i01.headTracking.followLeftHand
i01.eyesTracking.followLeftHand

i01.headTracking.followRightHand
i01.eyesTracking.followRightHand

There has been some talk about kinematics for InMoov in the forum, but as far as I know no agreement on or complete description of:
- the center point
- the directions to use
- the definition of distances
- the chain of links

I tried to read studywolf's description and examples of inverse kinematics and to extend his 3-link example arm into an InMoov model.

https://studywolf.wordpress.com/2013/04/11/inverse-kinematics-of-3-link…

Basically it uses three parameter arrays, each with one entry per joint (sketched below):
- the initial joint angle
- the resting-state joint angle (used to minimize the movement in each joint needed to reach a position)
- the arm segment length
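
In code that setup might look roughly like this (a sketch only; the segment lengths are placeholders, not InMoov measurements):

import numpy as np

# one entry per joint in each of the three arrays (placeholder values)
q_init = np.array([0.0, 0.0, 0.0])      # initial joint angles (radians)
q_rest = np.array([0.3, 0.6, 0.3])      # resting-state angles the solver is biased toward
lengths = np.array([0.28, 0.30, 0.25])  # arm segment lengths (metres)

# extending this to an InMoov arm would mean adding one entry per extra
# joint (omoplate, shoulder, rotate, bicep, ...) to all three arrays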

Using a Windows 10 64-bit PC and Anaconda 2.7, I was finally able to get his example to work (after many hours getting numpy to work).

Of course I am now stuck on how to extend his example into an InMoov model, but I wondered whether I could find any other members interested in this subject?

And my far-out vision of it? Tell InMoov to move his hand to a given (x, y, z) position and, based on speed and joint angles, have the eyes follow the hand? Or go through a sequence of such points? Or tell it to grab something at a given location?

I spent quite some time reusing Gaël's example of passing a ball from one hand to the other. It only worked occasionally, and only with a finely tuned, fixed sequence of movements.

With new functions like these available we could probably say:
- leftHand.moveTo(Pos1)
- leftHand.open [for placing the object on the hand; it's a self-levelling hand, so it will not drop the object :-) ]
- leftHand.moveTo(Pos2)
- rightHand.moveTo(Pos2 + z-offset)
- rightHand.close

kwatters

Juerg,

  Have you checked out my implementation of inverse kinematics for the InMoov arm? You can see it in the web GUI by starting the InverseKinematics3D service. It uses D-H parameters to describe the linkages in the InMoov arm. Basically, you can give the IK3D service a point in space and it will attempt to move the joints to reach that point. You can draw a straight line with the InMoov hand.

Here's an example script that shows how it works.

https://github.com/MyRobotLab/pyrobotlab/blob/master/home/kwatters/InMo…
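
In outline it does something like this (a rough sketch, not the linked script itself; the arm name and serial port are placeholders):

ik = Runtime.createAndStart("ik", "InverseKinematics3D")
leftArm = Runtime.start("leftArm", "InMoovArm")   # placeholder name/port
leftArm.connect("/dev/ttyACM0")

# hand the arm's DH chain to the IK service, then ask for a point in space
ik.setCurrentArm(leftArm.getDHRobotArm())
print(ik.currentPosition())
ik.moveTo(100, 200, 250)   # target point in the arm's coordinate frame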

I have more examples..

http://myrobotlab.org/content/joystick-control-inverse-kinematics-servi…

http://myrobotlab.org/content/leap-motion-service-and-webgui-update

and more info about it here:

http://myrobotlab.org/content/d-h-parameters-and-inmoov-kinematic-model

Thanks for all the links, I now know where to start experimenting.

Edit: I tried to see what is going on in the source code. I see an InverseKinematics3D.java class but could not find where the InMoov geometry is fed into the calculations?

Edit 2: @Kevin, did you create an InMoov robot definition for RoboAnalyzer, and if so, are you willing to share it?

Hi Juerg,  

I'm more than happy to share it.

The DH Parameters are part of the InMoovArm class.

https://github.com/MyRobotLab/myrobotlab/blob/develop/src/org/myrobotla…

I used RoboAnalyzer, but as it turns out, I wasn't super happy with it. I found "MRPT" to be much better for working out the DH params of the arm:

http://www.mrpt.org/

My approach to solving the inverse kinematics is based on the "Introduction to Robotics" lectures from Stanford University on YouTube and the book "Planning Algorithms" (http://planning.cs.uiuc.edu/). In particular I used the "homogeneous transformation matrix" (http://planning.cs.uiuc.edu/node104.html).

Based on this, I do a numeric approximation of the pseudo-inverse of the Jacobian matrix. Using that, there's an iterative algorithm that moves the hand towards the desired goal.
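
To make that a bit more concrete, here is a small numpy sketch of the idea (not the actual InverseKinematics3D code; the DH rows are placeholder values, not the real InMoov arm dimensions):

import numpy as np

# placeholder DH table: one row per link as (d, a, alpha); theta is the joint variable
DH = [(0.0, 0.28, np.pi / 2),
      (0.0, 0.30, 0.0),
      (0.0, 0.25, 0.0)]

def dh_transform(theta, d, a, alpha):
    # homogeneous transformation matrix for a single DH link
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def fk(q):
    # forward kinematics: end-effector position for joint angles q
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(q, DH):
        T = np.dot(T, dh_transform(theta, d, a, alpha))
    return T[:3, 3]

def jacobian(q, eps=1e-6):
    # numeric approximation of the 3xN position Jacobian (finite differences)
    base = fk(q)
    J = np.zeros((3, len(q)))
    for i in range(len(q)):
        dq = np.array(q, dtype=float)
        dq[i] += eps
        J[:, i] = (fk(dq) - base) / eps
    return J

def solve_ik(q, target, step=0.5, tol=1e-3, max_iter=200):
    # iteratively pull the hand toward the target using the pseudo-inverse Jacobian
    q = np.array(q, dtype=float)
    for _ in range(max_iter):
        error = np.asarray(target, dtype=float) - fk(q)
        if np.linalg.norm(error) < tol:
            break
        q = q + step * np.dot(np.linalg.pinv(jacobian(q)), error)
    return q

print(solve_ik([0.1, 0.1, 0.1], [0.4, 0.2, 0.1]))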

The math can be a bit scary, but (to my surprise) it works :)

Good luck! Let me know if you have any questions about the code. It's a little hacky right now, but the basic fundamentals are all in place.

Thanks, I found the DHLink initialization in the InMoovArm class.

But then this won't take the upper-body turn, the stomach bending, and all the head movements into account? So it would need to "add" the arm position onto the arm connection point based on these two movements, and we would also need to evaluate the eye positions?

Assuming these two (x, y, z) positions are known, I can't really see the path to aligning the eyes with the left/right hand - but I have to admit that I know very little about these things and will first need to experiment and learn a lot about it!

Hi Juerg,

  The current kinematic model assumes the shoulder is the base of the arm. The rotation point of the shoulder is the origin. The Z axis goes forward/backward, the Y axis is up/down, and the X axis is left/right (with respect to the shoulder). DH parameters can define any arbitrary robot arm with any number of links.
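
  In script terms (a sketch, with arbitrary numbers, just to show the frame):

ik = Runtime.createAndStart("ik", "InverseKinematics3D")
# origin at the shoulder rotation point:
#   x = left/right, y = up/down, z = forward/backward
# (after setCurrentArm(...), as in the example script above)
ik.moveTo(50, 100, 200)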

  If you want the InMoov head to look at its hand and follow it, I would suggest looking at the head tracking stuff with OpenCV. Handle the eye alignment the same way as the face tracking.

-Kevin

I tried to look at the source code, and I think I can get the current palm position in relation to the shoulder attachment point from InverseKinematics3D.currentPosition (after assigning the arm I am interested in with setCurrentArm, as in your python code example)?

I then tried to look into OpenCV but have to admit that I couldn't learn anything from it in relation to a proper moveTo command for neck, rothead, eyeX and eyeY.

But I thought that by calculating the rothead rotation point in relation to the arm connector and using the palm point retrieved from the IK service, I could calculate the proper angle for rothead? And similarly the neck angle? And use straight-ahead eye positions?

I found out that I am not very practised with these calculations, but if you think it could work out I will certainly give it a try. I assume I could do that in the python script?
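
Roughly what I have in mind (a sketch; the neck offset is a made-up placeholder I would still have to measure, and the angles would still have to be mapped onto the servo ranges):

import math

# neck/rothead rotation point relative to the shoulder attachment point
# (placeholder values in mm, to be measured on the robot)
NECK_OFFSET = (-120.0, 250.0, 0.0)

def head_angles_for_palm(palm):
    # palm = (x, y, z) from ik.currentPosition(), in the shoulder frame
    dx = palm[0] - NECK_OFFSET[0]   # left/right
    dy = palm[1] - NECK_OFFSET[1]   # up/down
    dz = palm[2] - NECK_OFFSET[2]   # forward/backward
    yaw = math.degrees(math.atan2(dx, dz))                    # rothead: rotate toward the hand
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # neck: tilt toward the hand
    return yaw, pitch

print(head_angles_for_palm((275.0, -53.0, 280.0)))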

Hi Juerg,  

  The InMoov head has the ability to use the web camera to keep a point in the center of the screen. This is called head tracking. It uses LK optical tracking and a PID controller. The webcam with OpenCV will track a point that it is looking at, and the PID controller will move the servos to ensure that the point stays in the middle of the screen. It's probably a lot easier than trying to calibrate and compute the angles you want the head/neck to be at.

I did play around with LK and head tracking for quite a while, also changed PID settings and so on, and even had a few direct conversations with Ale.

However - with my setup I found the tracking to be rather poor: up/down in particular was almost impossible, and following a red ball regularly lost track after a few motions (I use an Odroid XU4 to control InMoov, and the camera frame rate is around 20 frames per second).

Anyway - you propose to set the LK tracking to my current hand position and then let LK do the tracking? But it will surely lose track when I turn my hand 90 or 180 degrees, or even move it into a position the standard InMoov eye will not be able to point to?

I will give my more calculated way of tracking a try and let you know.

@Kevin

I now have this code in my MRL python script:

# 2016-01-31 juerg
# try to read the palm position of the left and right arm

# give each IK service its own name so left and right don't share one instance
ikLeft = Runtime.createAndStart("ikLeft", "InverseKinematics3D")
ikRight = Runtime.createAndStart("ikRight", "InverseKinematics3D")
leftArm = Runtime.start("leftArm", "InMoovArm")
rightArm = Runtime.start("rightArm", "InMoovArm")

leftArm.connect("/dev/ttyACM0")
rightArm.connect("/dev/ttyACM1")

# hand each arm's DH chain to its IK service
ikLeft.setCurrentArm(leftArm.getDHRobotArm())
ikLeft.addListener("publishJoinAnglesLeft", leftArm.getName(), "onJointAngles")
#ikLeft.moveTo(0,0,0)
print("left:  ", ikLeft.currentPosition())

ikRight.setCurrentArm(rightArm.getDHRobotArm())
ikRight.addListener("publishJoinAnglesRight", rightArm.getName(), "onJointAngles")
#ikRight.moveTo(0,0,0)
print("right: ", ikRight.currentPosition())
 
giving me this result:
('left:  ', (x=320.000, y=80.000, z=280.000 roll=0.000 , pitch=0.000, yaw=0.000))
('right: ', (x=320.000, y=80.000, z=280.000 roll=0.000 , pitch=0.000, yaw=0.000))
 
Activating the moveTo(0,0,0) calls I get:
('left:  ', (x=178.276, y=-151.022, z=280.000 roll=0.000 , pitch=0.000, yaw=0.000))
('right: ', (x=178.276, y=-151.022, z=280.000 roll=0.000 , pitch=0.000, yaw=0.000))
 
and
ikLeft.moveTo(10,0,0):
('left:  ', (x=180.980, y=-149.687, z=280.000 roll=0.000 , pitch=0.000, yaw=0.000))
 
ikRight.moveTo(10,0,0):
('right: ', (x=175.464, y=-152.394, z=280.000 roll=0.000 , pitch=0.000, yaw=0.000))
 
Can you please help me understand the numbers? 
I thought x, y, z would be relative to the arm's shoulder connection point and that 0,0,0 would be an unreachable point?
Using numbers like ikRight.moveTo(-10, 150, 200) results in
('right: ', (x=275.223, y=-53.509, z=280.000 roll=0.000 , pitch=0.000, yaw=0.000))
 
And when using e.g.
i01.moveArm("right", 5, 90, 30, 10)
would I also need an ikRight.moveTo() command to keep the IK service in sync?