Wow
Great job! That's fantastic. How are you mapping the movements to the servo angles? Are you using inverse kinematics, or some other mapping function?
Would love to see the code. Did you do this in Python?
Hi, yes, in Python with the OpenNI service.
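For anyone curious about the mapping question above: one common approach (not necessarily what Dom used, this is just a sketch) is to compute the angle at each tracked joint from three skeleton points and then map that angle linearly into the servo's travel range:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by the 3D points a-b-c
    (e.g. shoulder-elbow-wrist gives the elbow bend)."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(v1[i] * v2[i] for i in range(3))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

def to_servo(angle, in_min=0.0, in_max=180.0, out_min=0.0, out_max=180.0):
    """Clamp the joint angle and map it linearly into the servo range."""
    angle = max(in_min, min(in_max, angle))
    return out_min + (angle - in_min) * (out_max - out_min) / (in_max - in_min)
```

A fully straight arm, e.g. `joint_angle((0, 0, 0), (1, 0, 0), (2, 0, 0))`, gives 180 degrees, and a right-angle bend gives 90; `to_servo` then scales and clamps that into whatever range the physical servo accepts.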
Great work, Dom!
Well done.
It's exciting to see someone work on a similar thing with a different approach!
Cheers!
Thank you Grog.
A good thing indeed!
Have you thought about generating gesture files from the movements and making them repeatable? Also including speed and pauses?
That would be much nicer than the manual way of having to define each of the servo positions.
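To make the gesture-file idea concrete: a minimal sketch (names and structure are my own, not an existing MyRobotLab feature) would record timestamped servo positions during capture and replay them later with the original gaps, so pauses and speed come along for free:

```python
import time

class GestureRecorder:
    """Record timestamped servo positions so a captured movement
    can be replayed later, including its pauses and timing."""

    def __init__(self):
        self.frames = []   # list of (seconds since start, {servo: angle})
        self._t0 = None

    def record(self, positions, now=None):
        """Store one frame of servo positions with its timestamp."""
        now = time.monotonic() if now is None else now
        if self._t0 is None:
            self._t0 = now
        self.frames.append((now - self._t0, dict(positions)))

    def replay(self, move, speed=1.0, sleep=time.sleep):
        """Call move(servo, angle) per frame, waiting the original
        gap between frames (scaled by speed) before each one."""
        last_t = 0.0
        for t, positions in self.frames:
            sleep((t - last_t) / speed)
            last_t = t
            for servo, angle in positions.items():
                move(servo, angle)
```

The `frames` list could be dumped to JSON as the "gesture file"; `speed=2.0` would replay the gesture twice as fast while keeping the relative pauses.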
Thanks,
Yes, that is my goal for the future.
Problem running the script
Hi, I made only some language adjustments to your script and tried to run it.
I do not get the skeleton information with your script.
If I manually start capturing from the OpenNI tab in the GUI, I do get the skeleton blended in, but with the script the "onOpenNIData" function is never called.
Any hint?
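I am not familiar enough with the MyRobotLab internals to name the exact missing call, but the symptom (data flows once capture is started from the GUI, never from the script) usually means the callback is registered but capture is never started programmatically, or the listener is added after capture begins publishing. A generic sketch of the pattern with hypothetical names (not the actual MRL API):

```python
class SkeletonSource:
    """Stand-in for a sensor service: listeners only receive data
    if they are registered AND capture has been started."""

    def __init__(self):
        self.listeners = []
        self.capturing = False

    def add_listener(self, fn):
        self.listeners.append(fn)

    def start_capture(self):
        self.capturing = True

    def publish(self, data):
        if not self.capturing:
            return  # capture never started: callbacks never fire
        for fn in self.listeners:
            fn(data)
```

So the things to check in the script are: the OpenNI service is actually started, the Python script is subscribed to its skeleton data, and capture is started from the script rather than only from the GUI.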