Hi,

I have an HC-05 Bluetooth module, an Arduino, a Bluetooth dongle, a chassis with DC motors and a motor driver, and a DC supply. Now I want to track my robot with a camera placed on the ceiling of the room. As we discussed in the shoutbox, first the robot needs to be mapped.

How are we going to do that? The bot should know where it is on the map (the image we are capturing). We cannot rely on the camera alone to locate the robot, so we could add triangulation mapping, e.g. placing two IR emitters in the room and a receiver on the robot. Someone told me this helps with navigation.

The doubt I have is: how can an IR receiver know from which direction it is receiving the IR (from the left or the right)? I think it only knows that it received IR, not the direction. So how is triangulation going to happen if I don't know the angle and direction?
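That intuition is right: triangulation needs bearings, not just "I saw IR". If each beacon's bearing to the robot were somehow known (e.g. a rotating directional receiver), the math itself is just intersecting two rays. A minimal sketch in plain Python (the beacon positions and angles below are made-up example values, not anything from my setup):

```python
import math

def triangulate(b1, theta1, b2, theta2):
    """Locate a point as the intersection of two bearing rays.

    b1, b2: (x, y) positions of two known beacons.
    theta1, theta2: bearing angles (radians, from the +x axis)
    from each beacon toward the robot.
    Returns (x, y) or None if the rays are parallel.
    """
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve b1 + t1*d1 == b2 + t2*d2 for t1 via the 2D cross product.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # parallel rays: no unique fix
    dx, dy = b2[0] - b1[0], b2[1] - b1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom
    return (b1[0] + t1 * d1[0], b1[1] + t1 * d1[1])

# Beacons at (0,0) and (4,0); robot actually at (2,2), so the
# bearings are 45 degrees and 135 degrees respectively.
print(triangulate((0.0, 0.0), math.pi / 4, (4.0, 0.0), 3 * math.pi / 4))
```

Without the angles the method collapses, which is why plain IR receivers alone aren't enough for this.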

Coming to the image processing: how am I going to implement the WAVEFRONT ALGORITHM in MRL, and what filters do I have to use to achieve it?
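For reference, the wavefront algorithm itself is independent of MRL's filters: it's a breadth-first flood from the goal cell over an occupancy grid, after which the robot just follows strictly decreasing step counts. A small self-contained sketch (the grid here is a hypothetical map, with 0 = free and 1 = obstacle):

```python
from collections import deque

NEIGHBORS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def wavefront(grid, goal):
    """BFS from the goal, labeling each free cell with its step
    distance to the goal. Unreachable/obstacle cells stay -1."""
    rows, cols = len(grid), len(grid[0])
    cost = [[-1] * cols for _ in range(rows)]
    cost[goal[0]][goal[1]] = 0
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in NEIGHBORS:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and cost[nr][nc] == -1):
                cost[nr][nc] = cost[r][c] + 1
                q.append((nr, nc))
    return cost

def path_from(cost, start):
    """Descend the cost map from start to the goal (cost 0)."""
    r, c = start
    if cost[r][c] == -1:
        return []  # start is unreachable
    path = [(r, c)]
    while cost[r][c] > 0:
        for dr, dc in NEIGHBORS:
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(cost) and 0 <= nc < len(cost[0])
                    and cost[nr][nc] == cost[r][c] - 1):
                r, c = nr, nc
                break
        path.append((r, c))
    return path

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
cost = wavefront(grid, (2, 2))
print(path_from(cost, (0, 0)))  # route around the center obstacle
```

The image-processing side (thresholding the overhead image down to such a grid) is a separate step from the planner itself.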

And I ask the members of MRL to make a series of tutorials, from hello-world to advanced, which will help beginners like me. I can see MRL is packed with lots of features, but I don't know how to use them. Preferably text-based tutorials, with some videos too, because there are many people like me using the slowest internet connections.


GroG

11 years 4 months ago

Hi vikirobot,

Thanks for posting. It is possible to track a robot with only the overhead camera, but I still need more information.

  1. Is the overhead camera already in the location you want to use?
  2. What is the overhead camera plugged into? Is it a USB webcam?
  3. Have you tried using the USB webcam with MRL? If so, what were the results? If it worked, can you post a couple of pictures?
  4. Do you have a simple H-bridge circuit? How is it connected to the Arduino?


1) Yes, I have a USB camera connected to the PC.

2) See 1).

3) Some days ago you told me about tracking color using MRL (HSV). I tried it; the first time it wasn't working, but the second time I was able to move the slider from max to min to track the object. Then some error occurred and MRL closed automatically. I think I already sent you a no-worky 7 or 8 days ago.

4) I have an L293 chip on a breadboard wired to the Uno. It is working fine.

GroG

11 years 4 months ago

In reply to by vikirobot

If all is working up to that point, then hit the "record frame" button in OpenCV, pointing at the robot, with no filters on, and post the picture. When you hit the button a single JPG should be saved to the directory you installed MRL into.

GroG

11 years 4 months ago

Stop MRL and restart it, then put this in the Python tab and hit Execute - it should reduce the size of your video stream:

 

opencv = Runtime.createAndStart("opencv", "OpenCV")
opencv.addFilter("PyramidDown","PyramidDown")

GroG

11 years 4 months ago

It still looks wrong.

It should look like this -

Checking the "no-worky" you sent, you are at build 1172 - we are currently at 1407, so you are at least 235 updates behind. My recommendation is to download the latest release. I understand you have a slow connection; after you download the initial 1407 release you can easily stay current with bleedingEdge updates - look here for detailed directions - http://myrobotlab.org/content/helpful-myrobotlab-tips-and-tricks-0#bleedingEdge

If you're going to be controlling the robot over a significant area, you need a wide field of view. Most webcams have a narrow field of view (around 24 degrees), while the human eye has closer to 95 degrees. Mounting the camera higher increases the area it covers, even though the field-of-view angle stays the same.
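A quick back-of-the-envelope check of how much floor a given camera covers (simple trigonometry, assuming a downward-facing camera; the heights and angles below are just example values):

```python
import math

def ground_coverage(height_m, fov_deg):
    """Width of floor (in meters) visible to a downward-facing
    camera mounted at height_m with the given field-of-view angle.
    Coverage = 2 * h * tan(fov / 2)."""
    return 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)

# A narrow 24-degree webcam on a 2.5 m ceiling sees only about 1 m
# of floor; a 90-degree wide-angle camera sees about 5 m.
print(ground_coverage(2.5, 24.0))
print(ground_coverage(2.5, 90.0))
```

This is why a wide-angle lens (or a higher mount) matters so much for whole-room tracking.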

A robot is on my floor, but for easier tracking it needs an easy identifier - a white sticker would probably work well.

Yes, it certainly is possible, and yes, it does add complexity. You might want to quickly search for a wide-angle webcam, or a wide-angle lens to fit your current webcam. If I were you, I'd just begin with what you have, then figure out which parts need improving.