There was a lot of discussion about the servo mapper this weekend, and we did not seem to reach a consensus on how to fix the problems related to its usage.
To have the servo service work properly, the user should be able to
Any idea why the ShoutBox is empty this morning, and why any attempt to post fails?
I started testing the Servo service to get it working again.
I ran some tests on version 1723 and then repeated the same tests on the latest version.
InMoov, mouth, speech and eyes
Speech
Mouth and Eyes
Servos often use potentiometers to measure position. That works fine if you have a good-quality potentiometer. An alternative is to use an optical or mechanical encoder. You can find examples of 3D-printed rotary encoders here:
http://www.thingiverse.com/thing:1957311
It comes in two variants.
Natural binary encoding (i.e., a hole represents 1 and no hole represents 0).
Gray code. Gray code has the advantage that only one bit changes for each step of the rotation.
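As a quick illustration of why that matters (a generic sketch, not code from the encoder project), converting between natural binary and Gray code is just a couple of XOR operations, and printing a few values shows that neighbouring positions differ in exactly one bit:

    # Natural binary to Gray code: adjacent values differ in one bit,
    # so a slightly misaligned read can never jump by more than one step.
    def binary_to_gray(n):
        return n ^ (n >> 1)

    # Inverse: binary is the XOR of all right shifts of the Gray value.
    def gray_to_binary(g):
        b = 0
        while g:
            b ^= g
            g >>= 1
        return b

    for i in range(8):
        print(i, format(binary_to_gray(i), "03b"))  # 000 001 011 010 110 111 101 100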
There was a long discussion about speed control and the naming of the servos with calamity's changes.
On my long flight from Miami to Zurich I concluded that speed control must be a stepwise process, commanding the servo to make many little steps according to the loop time and speed settings.
So it should be possible to see whether the joint moved during the last increment or not.
Seeing that we are not at the target position, but also did not make any progress over the last one or two increments, tells us that the joint is probably blocked or stalled.
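A minimal sketch of that idea (all names here are illustrative, not actual MRLComm code): command one small step per loop tick, and flag a stall when the feedback position stops changing before the target is reached.

    import time

    def move_with_speed(servo, target, deg_per_sec, loop_dt=0.02, stall_ticks=2):
        # 'servo' is assumed to expose write(pos) and read_feedback();
        # both are placeholder names, not an actual MRL API.
        step = deg_per_sec * loop_dt           # degrees moved per loop tick
        cmd = servo.read_feedback()
        last = cmd
        stalled = 0
        while abs(target - cmd) > step:
            cmd += step if target > cmd else -step
            servo.write(cmd)                   # command one small increment
            time.sleep(loop_dt)
            now = servo.read_feedback()
            if abs(now - last) < step / 2.0:   # no measurable movement
                stalled += 1
                if stalled >= stall_ticks:     # no progress for 2 increments
                    return False               # joint is blocked / stalled
            else:
                stalled = 0
            last = now
        servo.write(target)                    # final step to the target
        return True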
I have MRL 1861 and its supporting MRLComm.ino loaded onto my two Arduino MEGAs.
The Merge them All! script is such a WIP that it only runs the hands and wrists, but not the arm, head, mouth, torso, etc., even though there are arm commands and my ScriptType is set to Full. I've been unable to get anything else to move (on purpose) with this mysterious setup, sans any meaningful documentation.
A little video demonstrating the IntegratedMovement service.
The positions of the cymbal and the tom are scripted, but how the InMoov chooses to move the stick to the tom or cymbal is fully determined by the service.
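Roughly, a script for such a demo would only declare the target points and let the service solve the arm motion. A sketch of that division of labour follows; the method names are hypothetical, not the actual IntegratedMovement API:

    # Hypothetical sketch: 'moveTo' and the arm name are illustrative,
    # not the real IntegratedMovement API.
    ik = Runtime.start("ik", "IntegratedMovement")

    cymbal = (250, 100, 420)   # scripted x, y, z targets
    tom = (300, -50, 380)

    for x, y, z in (cymbal, tom, cymbal):
        # The script only says where the stick should end up;
        # the service decides how the joints get it there.
        ik.moveTo("leftArm", x, y, z)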
I got Azul running on MRL 1859 and Arduino version 53, except that the mouth does not move when Azul speaks. Any ideas? I posted a video on YouTube: Azul Happy New Year. I have tried a Pi 3, but this was using Windows 7 on a laptop.
This past week I made it my Hack-A-Thon project to add MaryXML as a valid input for speech inside of MyRobotLab. I decided to add this to improve the capabilities of MarySpeech inside MRL. MaryXML is a markup language that lets the voice be modified in the middle of an utterance.
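For example, a prosody element can slow down and raise the pitch of part of a sentence. A minimal sketch follows; it assumes MarySpeech accepts raw MaryXML through speak(), which is the capability this work adds:

    # Assumes MarySpeech accepts raw MaryXML via speak(); the prosody
    # attributes follow the MaryTTS MaryXML schema.
    mouth = Runtime.start("mouth", "MarySpeech")
    xml = ('<maryxml version="0.5" '
           'xmlns="http://mary.dfki.de/2002/MaryXML" xml:lang="en-US">'
           'I can speak normally, '
           '<prosody rate="-30%" pitch="+20%">or slower and higher,</prosody> '
           'in the middle of one sentence.'
           '</maryxml>')
    mouth.speak(xml)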