Odometer for the InMoov cart

In trying to add some self-control to my InMoov, I am working on making it find an ArUco marker and position itself in front of it using its mecanum wheel base.


A disadvantage of the mecanum wheel base is the variety of possible movements and - depending on the surface - quite a lot of slip, which makes wheel encoders a poor solution.


In addition I have been pointed to SLAM and have read quite a bit about its use in improving a robot's navigation skills. One thing I read is that without odometry only poor results can be achieved, so I started to experiment with images taken from a camera pointing straight down at the floor.



I started to google for existing solutions and found that OpenCV has some nice functions that should allow me to retrieve the shift between two sequential images.


After playing around a bit with images and algorithms I was able to achieve the following (a code sketch follows below):

  • Find keypoints and descriptors in image A
  • Find keypoints and descriptors in image B
  • Call a matching function for these points
  • Use the matched keypoints to calculate and visualize the shift between the 2 images


(here the 2 images are shown side by side with connections between the matching points)
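In code, those steps look roughly like this (a minimal sketch; ORB stands in for the detector here since I did not note which one I used at this stage, and the file names are placeholders):

    import cv2
    import numpy as np

    # load two sequential frames (placeholder file names)
    imgA = cv2.imread('frameA.png', cv2.IMREAD_GRAYSCALE)
    imgB = cv2.imread('frameB.png', cv2.IMREAD_GRAYSCALE)

    # find keypoints and descriptors in both images
    orb = cv2.ORB_create()
    kpA, desA = orb.detectAndCompute(imgA, None)
    kpB, desB = orb.detectAndCompute(imgB, None)

    # match the descriptors (Hamming distance suits ORB's binary descriptors)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(desA, desB), key=lambda m: m.distance)

    # use the matched keypoints to estimate the shift between the two images
    shifts = [np.subtract(kpB[m.trainIdx].pt, kpA[m.queryIdx].pt) for m in matches]
    dx, dy = np.median(shifts, axis=0)
    print('shift: dx=%.1f px, dy=%.1f px' % (dx, dy))

    # visualize the two images side by side with lines between matching points
    vis = cv2.drawMatches(imgA, kpA, imgB, kpB, matches[:30], None)
    cv2.imwrite('matches.png', vis)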


Mounting a cam on my cart and driving around immediately showed that my cheap webcam was not up to the job and delivered only blurry images while in motion. With these images I could not get sufficient keypoint matches. Looking for a better cam I found the PS3 Eye, which promises frame rates of up to 60 fps at 640×480. I found a used one advertised and paid 20 bucks for it, plus a Windows driver (3 bucks) and instructions on how to set the cam's frame rate and image size.


    import cv2

    # open the PS3 Eye and request a small, fast capture mode
    cap = cv2.VideoCapture(1)
    cap.set(cv2.CAP_PROP_FPS, 80)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
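Whether these requests actually stick depends on the driver and backend, so it is worth measuring what the camera really delivers, for example by timing a batch of frames:

    import time

    # grab 100 frames and compute the frame rate actually achieved
    t0 = time.perf_counter()
    for _ in range(100):
        ok, frame = cap.read()
    print('measured fps: %.1f' % (100 / (time.perf_counter() - t0)))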


Results improved, but I still had a bit of a timing problem: about 300 to 400 ms per image comparison on my laptop (Core i7 7500, Intel HD 620, 32-bit Python). I then found ORB and brute-force matching, and after many tries with thresholds and other ORB settings I finally have a solution that needs only 100 to 150 ms per comparison on my laptop (about 7 ms!!! on my main PC with a GTX 1070) and shows rather good results.


    orb = cv2.ORB_create(nfeatures=250, edgeThreshold=4, patchSize=20,
                         fastThreshold=7, scoreType=cv2.ORB_FAST_SCORE)
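A sketch of how one such comparison can be timed, reusing imgA and imgB from the earlier sketch; the matcher setup (Hamming norm with cross-check) is one reasonable choice, not necessarily my exact configuration:

    import time

    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # time one full comparison: detect, describe and match both frames
    t0 = time.perf_counter()
    kpA, desA = orb.detectAndCompute(imgA, None)
    kpB, desB = orb.detectAndCompute(imgB, None)
    matches = bf.match(desA, desB)
    print('%d matches in %.0f ms' % (len(matches), 1000 * (time.perf_counter() - t0)))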


This way I can command the cart to move a certain distance and then stop.
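Roughly sketched, with PIXELS_PER_CM as a hypothetical calibration constant (image shift in pixels per cm of travel at the camera's mounting height) and estimate_shift() and cart.stop() as hypothetical stand-ins for the matching code above and the cart's motor interface:

    PIXELS_PER_CM = 12.5   # hypothetical: measure once at the camera's mounting height

    def drive_distance(cart, cap, target_cm):
        """Accumulate the visual shift frame by frame, stop at the target distance."""
        travelled = 0.0
        ok, prev = cap.read()
        while travelled < target_cm:
            ok, frame = cap.read()
            dx, dy = estimate_shift(prev, frame)   # median keypoint shift in pixels
            travelled += (dx * dx + dy * dy) ** 0.5 / PIXELS_PER_CM
            prev = frame
        cart.stop()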


On the cart I also have a BNO055 installed, so I know the cart's current orientation. In addition I am currently taking depth values from the Kinect and am trying to combine these with the cart's position and orientation values to draw a map of my room.
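A rough sketch of such a map update, assuming the cart pose (x, y in cm plus the BNO055 heading) is known and the Kinect scan has already been reduced to (beam angle, depth) pairs; the grid layout and all names here are placeholders:

    import numpy as np

    CELL_CM = 5                                  # grid resolution: 5 cm per cell
    grid = np.zeros((400, 400), dtype=np.uint8)  # 20 m x 20 m, cart starts mid-grid

    def mark_obstacles(grid, cart_x, cart_y, heading_deg, beams):
        """beams: list of (beam_angle_deg, depth_cm) pairs from one Kinect scan."""
        for beam_deg, depth_cm in beams:
            a = np.radians(heading_deg + beam_deg)
            # project the depth reading into world coordinates
            wx = cart_x + depth_cm * np.cos(a)
            wy = cart_y + depth_cm * np.sin(a)
            col = int(wx / CELL_CM) + grid.shape[1] // 2
            row = int(wy / CELL_CM) + grid.shape[0] // 2
            if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
                grid[row, col] = 255             # mark the cell as occupied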


Initially I thought I could find existing Python projects that help with the creation and updating of a floor map, but the results are rather diffuse. OctoMap looked promising, but I have not yet been able to find instructions or examples on how to use it.



GroG:

Video Odometer would be pretty cool...

But what about just dropping a USB mouse from the center of the cart?

Almost all mouse data is the same .. it sends counted clicks of X & Y ..

Not trying to dissuade you from the video processing ... but it could be part of your (buzz word warning) Sensor Fusion :)
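For what it's worth, on Linux those X & Y counts can be read straight from the device; a minimal sketch using the 3-byte PS/2-style packets that /dev/input/mice delivers (needs read permission on the device):

    # minimal sketch: accumulate relative mouse motion from /dev/input/mice (Linux)
    with open('/dev/input/mice', 'rb') as mouse:
        x = y = 0
        while True:
            buttons, dx, dy = mouse.read(3)   # one 3-byte PS/2-style packet
            # dx and dy arrive as unsigned bytes; convert to signed deltas
            if dx > 127: dx -= 256
            if dy > 127: dy -= 256
            x += dx
            y += dy
            print('x=%d  y=%d' % (x, y))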