Python on Windows skills needed

I would like to use ArUco markers in my InMoov project. There are instructions on how to build a Python wrapper for the library, but they are for Linux.

I am stuck now, as I can't see how to make it work in my CPython 2.7, Windows 10 64-bit environment.

Any help available?

GroG:

This looked potentially relevant: 

Our JavaCV is, I think, 1.3 and would need to be updated to 1.32; then it could be ported as a filter.

juerg:

Does not look like a big step to take. Thanks for your feedback. I will keep an eye on the version then; no hurry, as other stuff occupies me too.

kwatters:

TopCodes; JavaCV is at the latest

I recently updated JavaCV to the latest version, 1.3. So this should be available... it should just be a matter of wrapping it in an OpenCVFilter in MRL to expose the functionality.

That being said, this filter seems very similar to the "top codes" service in MRL. TopCodes are similar in nature to ArUco markers... not sure if it does what you want, but it's something you might want to try or look at.

juerg:

TopCodes look like top items!

Hi Kevin

Never noticed these, but they look to be more or less what I was looking for. ArUco is not a must for me; an identifiable mark is fine.

I will have to experiment with some printed tags and glue them to objects I would like to identify.

I did a lot of programming in my career, but never with Java, and the highly refactored MRL is a puzzle to me. So it looks out of the question that I will be able to add an ArUco service or filter to MRL myself.

However, using the TopCodes service from an MRL Python snippet looks to be within the range of what I can accomplish. From the code, it looks like it scans a fixed PNG file, so I will need to find out how I can give it a captured image from Marvin's cam to analyze.
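Since the example scans a file from disk, one simple glue pattern would be to save the current camera frame to a file and hand that path to the scanner. This is a pseudocode sketch only; the service and method names below are assumptions, not verified against the current MRL API:

```
# pseudocode -- every name here is an assumption, check the actual MRL services
opencv = Runtime.createAndStart("opencv", "OpenCV")     # camera service
topcodes = Runtime.createAndStart("topcodes", "TopCodes")
opencv.capture()                   # start grabbing frames
framePath = "lastFrame.png"
opencv.saveFrame(framePath)        # assumed: write the current frame to disk
codes = topcodes.scan(framePath)   # assumed: same entry point as the PNG demo
for code in codes:
    print(code)                    # id and image position of each tag found
```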

Thanks a lot for this hint.


juerg:

On second inspection, I found this statement:


  • The camera must be orthogonal to the interaction surface.

Now this will be a problem, as I would like to position the robot orthogonal to the tag, so from the captured frame I need to be able to calculate an approximate x angle, not a z angle (assuming z is the depth axis).

Maybe adding a square around the tag will allow me to make some estimates about the x orientation.
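One crude way to get that estimate from a square outline: viewed head-on, a square projects to a 1:1 aspect ratio, and rotating it by some angle about the vertical axis shrinks its apparent width by roughly the cosine of that angle (a weak-perspective approximation that ignores lens distortion and tilt). A minimal sketch, with the function name and pixel inputs my own invention:

```python
import math

def yaw_from_square(width_px, height_px):
    """Estimate rotation about the vertical axis from a square's
    apparent pixel size: head-on the square is 1:1, and a yaw of
    angle a shrinks the width by cos(a) (weak-perspective model)."""
    ratio = min(width_px / float(height_px), 1.0)  # clamp noise above 1:1
    return math.degrees(math.acos(ratio))

print(yaw_from_square(100, 100))  # head-on: 0 degrees
print(yaw_from_square(50, 100))   # width halved: about 60 degrees
```

Note this only gives the magnitude of the angle; the sign (robot to the left or right of the tag) would need an extra cue, for example which vertical edge of the square appears shorter.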