It took me too long surfing for this information, and it's still scattered across the internet.
Everyone has made great progress on the Nixie release: the Maven build, quick and stable dependencies, YOLO, DL4J, hundreds of bugs squashed. I thought I'd add a good season's gift too ..
Long overdue - searchable tabs.
Now I won't go crazy searching through all those InMoov tabs (where the hell is rotHead?) :)
A small demo of using BoofCV to combine the depth map and the video stream into a realtime point cloud.
https://github.com/MyRobotLab/pyrobotlab/blob/develop/home/Mats/OpenKinectPointCloud.py
Kinect data replayed from saved "frames"
Kinect capture with frames on a computer with a Kinect sensor
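The linked script does the heavy lifting with freenect and BoofCV, but the core depth-to-point-cloud step is just pinhole back-projection. A minimal NumPy sketch of that math (the intrinsics below are illustrative ballpark values, not an actual Kinect calibration):

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into an Nx3 point cloud
    with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading

# Synthetic frame: a flat wall 2 m away, Kinect-v1-sized image
depth = np.full((480, 640), 2.0)
cloud = depth_to_pointcloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```

The same per-pixel formula is what the depth-map-plus-video combination relies on; the RGB stream just supplies a color for each back-projected point.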
Ahoy !
Locally I have changed all the references from Arduinos to ServoControllers in InMoov. I did this because of the refactoring I did for the JMonkeyEngine service. I think it's the way InMoov should go anyway, and the hard-coded Arduino dependency has been a point of difficulty for builders who do not use Arduinos.
My question is: should I commit these changes now?
This might mean:
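To illustrate the idea of the refactor (names and pins here are hypothetical, not MyRobotLab's actual API): InMoov-level code talks only to a small ServoController interface, so an Arduino becomes just one backend among others, and something like the JMonkeyEngine simulator can stand in without hardware:

```python
from abc import ABC, abstractmethod

class ServoController(ABC):
    """The minimal contract InMoov-level code needs from a servo backend."""
    @abstractmethod
    def attach(self, pin: int) -> None: ...
    @abstractmethod
    def move_to(self, pin: int, angle: float) -> None: ...

class ArduinoController(ServoController):
    def __init__(self):
        self.state = {}
    def attach(self, pin):
        self.state[pin] = None   # real version would send an attach over serial
    def move_to(self, pin, angle):
        self.state[pin] = angle  # real version would write the angle over serial

class SimController(ServoController):
    """e.g. a JMonkeyEngine-backed simulator; same contract, no hardware."""
    def __init__(self):
        self.state = {}
    def attach(self, pin):
        self.state[pin] = None
    def move_to(self, pin, angle):
        self.state[pin] = angle

def rot_head(ctl: ServoController, angle: float):
    ctl.attach(12)          # head-rotation servo; pin number is illustrative
    ctl.move_to(12, angle)

# The same InMoov-level call works against either backend
for ctl in (ArduinoController(), SimController()):
    rot_head(ctl, 90.0)
```

The point of the abstraction is exactly the line `rot_head(ctl, ...)`: the caller never mentions an Arduino.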
Speed test with FaceDNN filter
Hey everyone, just thought I would give everyone an update on my attention system. As I have been working on this I have had to climb one hill after another, but I am making progress. I have made two camera grabbers: one produces a standard Cartesian image, the other a log-polar image. The log-polar image helped me overcome some identification issues and helps with vergence of the eyes; it allows me to give it more natural eye movements. Color segmentation, color processing, the face detector, blob detector, hand detector, optical flow, and face tracker are done. I still have quite a bit of work to do.
I added force feedback feature to my BLDC actuator.
For this first test, I use a 20 kg load cell and an HX711 amplifier connected to my Arduino Pro Mini.
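The HX711 only streams raw 24-bit ADC counts; turning those into a force reading is a tare offset plus a scale factor obtained by calibrating against a known weight. A hypothetical sketch of that conversion (all numbers below are made up for illustration):

```python
class LoadCell:
    """Convert raw HX711 counts to grams via tare + one-point calibration."""
    def __init__(self):
        self.offset = 0.0   # raw counts at zero load (tare)
        self.scale = 1.0    # raw counts per gram

    def tare(self, raw_at_zero):
        self.offset = raw_at_zero

    def calibrate(self, raw_at_known, known_grams):
        self.scale = (raw_at_known - self.offset) / known_grams

    def grams(self, raw):
        return (raw - self.offset) / self.scale

cell = LoadCell()
cell.tare(8_400)                 # reading with nothing on the cell
cell.calibrate(108_400, 500.0)   # reading with a known 500 g weight
print(cell.grams(58_400))        # 250.0
```

For force feedback, the actuator loop would compare `cell.grams(raw)` against a target force and adjust the BLDC current accordingly; the conversion above is the sensing half of that loop.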