kwatters's blog

LeapMotion hand position / translation


Well.. I started testing the Leap Motion locally. I found that the Leap Motion, sitting on my desk, can measure a lot of things about the orientation of my hand in front of my body.  It recognizes both left and right hand position, orientation, and finger direction relative to the palm...

Initially, I am interested in tracking the position of the hand in the frame, so I want to start getting a sense of the x, y, z range readings.

I see that the x,y,z values range between
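Getting a feel for the range can be as simple as tracking the running min/max of each axis. A minimal sketch, assuming the (x, y, z) tuples stand in for palm-position samples pulled from each Leap frame (e.g. `hand.palmPosition()` in the Leap SDK); the sample values here are made up:

```python
class RangeTracker:
    """Accumulates the min/max seen on each of the x, y, z axes."""

    def __init__(self):
        self.mins = [float("inf")] * 3
        self.maxs = [float("-inf")] * 3

    def update(self, point):
        # Widen the observed range on each axis for this sample.
        for i, v in enumerate(point):
            self.mins[i] = min(self.mins[i], v)
            self.maxs[i] = max(self.maxs[i], v)

    def ranges(self):
        # One (min, max) pair per axis.
        return list(zip(self.mins, self.maxs))


tracker = RangeTracker()
for sample in [(-120.0, 180.5, 30.2), (95.3, 310.0, -45.7), (0.0, 250.0, 10.0)]:
    tracker.update(sample)
print(tracker.ranges())
```

After enough hand waving over the sensor, the accumulated pairs give a rough usable envelope for mapping hand position onto something else (a servo range, a cursor, etc.).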

Oculus DK1, DK2 and Direct Mode support for SDK 0.7+


An Agent has been detected in the matrix!

An Agent of the Matrix in the Rift

So, as it turns out, the latest Oculus Rift SDK and Runtime no longer support "extended mode", which means we need to update our support for it.  I have started working with LWJGL and some examples.  Above you can see a virtual screen rendered in front of the user in the Oculus Rift.

The QA Countdown to Release


Here is a list of the current services in MRL so we can figure out what to QA.

Links:

WebGUI preview with google speech and program ab


Update: Here's a better preview using MaryTTS


Here's a sneak peek at the new WebGUI with ProgramAB and speech recognition using the Google Chrome Web Speech API.


Boston Hack-A-Day meetup


So, I saw on Hack-A-Day that there was going to be a meetup at the Artisan's Asylum here in Boston.  I wasn't even aware of any maker spaces around here, so I just had to go check it out.  Awesome times, lots of cool projects, including.. yes..  a hover board...


Gear Motor and the Elbow.. Work In Progress


Ok, so I realize that I'm editing and updating this blog in reverse.. but so be it.  Anyway, here's another little update on the progress with this bevel gear / robot arm joint project.  It moves!  It knows where it moves!



Kinect Depth Map rendering in Blender



Well, as we open the gateways to Blender from MRL, it seemed like a pretty obvious leap to start trying to render point cloud data from the Kinect via OpenNI in Blender.  I had the Kinect write all the x, y, z points out to a text file.  In a small Python script in Blender, I loaded that text file and rendered a small circle for each point.
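The Blender-side script can be sketched roughly like this. The one-point-per-line "x y z" dump format is an assumption, and the `bpy` calls (only available inside Blender's embedded Python) are simplified; here spheres stand in for the small circles:

```python
def parse_points(text):
    """Turn lines of 'x y z' into a list of (x, y, z) float tuples."""
    points = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3:  # skip blank or malformed lines
            points.append(tuple(float(p) for p in parts))
    return points


def render_points(points):
    # bpy is only importable when this script runs inside Blender.
    import bpy
    for x, y, z in points:
        # One small primitive per Kinect sample point.
        bpy.ops.mesh.primitive_uv_sphere_add(location=(x, y, z))


# Example dump as the Kinect service might have written it (made-up values).
dump = "0.0 0.0 800.0\n10.5 -3.2 812.4\n"
print(parse_points(dump))
```

For a full 640x480 depth frame this naive one-object-per-point approach gets slow fast; a single mesh with one vertex per point would scale much better, but the idea is the same.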

Taurus II Dexterous Telepresence Manipulation System, Model TMS2


Just came across this robot...  It's a bit more agile than the InMoov.


Analog Signals, Software based DSP, Audio processing and more in MRL


Wow, this is a big topic.. I really just want to start the conversation about a few things in MRL.  The open source DSP library TarsosDSP was identified as a nice library to help implement these DSP-like functions.  The library says it is for audio processing, but let's consider that audio processing is just analog signal processing.  Arduino analog inputs are definitely analog and should be able to be pumped into any self-respecting DSP library.
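To illustrate the point that DSP routines don't care where the samples come from, here's a minimal sketch (in Python, not TarsosDSP, which is Java) of a moving-average low-pass filter; the same math works whether the input is audio frames or `analogRead()` values streamed from an Arduino:

```python
from collections import deque


def moving_average(samples, window=4):
    """Simple low-pass filter: each output is the mean of the last
    `window` input samples.  Audio frame or Arduino ADC reading,
    the math doesn't care."""
    buf = deque(maxlen=window)
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out


# A noisy "analog" signal jittering around 512 (mid-scale on a 10-bit ADC).
print(moving_average([500, 524, 508, 516], window=4))
```

The window size trades smoothing against lag, which matters more for control loops (say, smoothing a sensor feeding a servo) than for offline audio analysis.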

How many blobs do you see?


A new OpenCV filter, "SimpleBlobDetector", has been added.  It will watch a video stream and count up the number of blobs that it sees.

There is no blob tracking yet; right now it just collects all points of interest, so long as each new point of interest is more than 10 pixels away from an existing one.

In the image below, I drew some blobs on a sheet of paper and held it up in front of the camera..  look at that, there are 5 blobs!