Hi
While teaching my bot to move in front of an optical tag I ran into this problem: if no tag is visible from the current location, where should the bot go to check for the tag from another spot in the room?
When looking for the tag my task (python) commands the bot to make a 360 degree scan at its current position: at each cart orientation a 6 * 10 degree scan with "rothead" (using MRL's REST API), taking an image at each step and looking for a marker, followed by a 60 degree rotation of the cart (another python task, controlled with rpyc) until the full circle is covered. At each of the 6 cart orientations I also take a kinect depth image, create a top view of the closest obstacles and add it to a floor map.
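The top view step is essentially: take the closest depth reading per image column and project it onto the floor plane. A simplified sketch (names and thresholds are only for illustration, assuming the depth frame arrives as a numpy array in millimetres):

import numpy as np

def depth_to_top_view(depth_mm, fov_h_deg=57.0, max_range_mm=4000):
    """Collapse a kinect depth frame (H x W, millimetres) into top view points:
    for each image column keep the closest obstacle and convert it to
    x/y floor coordinates relative to the cart (still in mm)."""
    h, w = depth_mm.shape
    # use a horizontal band around the sensor height to ignore floor and ceiling
    band = depth_mm[h // 2 - 20 : h // 2 + 20, :].astype(float)
    band[band == 0] = np.nan                      # 0 means "no reading" on the kinect
    closest = np.nanmin(band, axis=0)             # nearest obstacle per column
    # angle of each column across the horizontal field of view
    angles = np.deg2rad(np.linspace(-fov_h_deg / 2, fov_h_deg / 2, w))
    valid = np.isfinite(closest) & (closest < max_range_mm)
    x = closest[valid] * np.sin(angles[valid])    # sideways offset
    y = closest[valid] * np.cos(angles[valid])    # distance ahead
    return np.column_stack((x, y))

The points then get rotated by the current cart heading before they are stamped into the floor map.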
If no tag was seen I need to move to another location in the room as the tag simply might not be visible from the current location - but where to move to?
I thought about picking a random direction, but since the kinect data tells me which directions make sense and which do not, I implemented a map search instead. It rotates a circular mask around the cart and checks for obstacles in the combination of floor map and mask. Directions with obstacles get eliminated, leaving a good candidate for the direction to move the robot to.
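Stripped down, the search looks roughly like this (names and parameters are illustrative; the floor map is an image with obstacle pixels set):

import numpy as np
import cv2

def free_direction(floor_map, cart_px, probe_dist_px=100, radius_px=25):
    """floor_map: 2D uint8 array, 255 where the kinect saw an obstacle.
    Rotate a circular probe around the cart position and return the first
    heading (degrees) whose probe area contains no obstacle pixels."""
    for heading in range(0, 360, 10):
        a = np.deg2rad(heading)
        cx = int(cart_px[0] + probe_dist_px * np.cos(a))
        cy = int(cart_px[1] + probe_dist_px * np.sin(a))
        mask = np.zeros_like(floor_map)
        cv2.circle(mask, (cx, cy), radius_px, 255, -1)   # filled probe circle
        if cv2.countNonZero(cv2.bitwise_and(floor_map, mask)) == 0:
            return heading                               # nothing in the way
    return None                                          # fully blocked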
https://drive.google.com/file/d/1ILutHqP1ozZUUMZ5OT7ppOs1Izxhl5K5/view
In the short video the white dot is the location of my bot, the lines represent the closest objects the kinect detected, and the white circle marks the area the program checks for obstacles. So my next step will be to move the robot maybe 2 meters to the right and then have a look around again.
Juerg
?
Was the tag it was looking for available to be seen?
Have you got it moving in the new direction and taking another look yet?
Are you using the SLAM service?
This does interest me, as we will need navigation to find objects when we get our robots walking.
hi Ray
thanks for your interest.
I have a whole bunch of processes working together to reach my goal.
At the top is a "navigation Manager", a python task. It runs at the moment on my PC workstation. All other tasks run on a laptop mounted on the robot's cart.
At the bottom I have MRL (remote controlled through its REST API) and my own arduino program that controls the 6 IR distance sensors, reads out the bno055 and controls the drive motors.
In between is a cart controller. It listens for rotation and move commands from the navigation manager (by use of rpyc) and keeps track of the cart movements with the help of a floor watching cam.
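In principle such floor cam odometry boils down to measuring the shift between consecutive frames of the downward-looking camera, e.g. with OpenCV's phase correlation (sketch of the principle only; the scale factor has to be calibrated for cam height and lens):

import cv2
import numpy as np

MM_PER_PIXEL = 0.5          # example value, depends on cam height and lens

def frame_shift_mm(prev_gray, curr_gray):
    """Estimate the cart's translation between two consecutive grayscale
    frames of the floor cam and convert it from pixels to millimetres."""
    prev = np.float32(prev_gray)
    curr = np.float32(curr_gray)
    (dx_px, dy_px), _response = cv2.phaseCorrelate(prev, curr)
    return dx_px * MM_PER_PIXEL, dy_px * MM_PER_PIXEL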
I also have a depth image python task that controls the kinect and produces a top-view obstacle map, and an aruco python task that tries to locate an aruco marker in an image and returns its distance and orientation.
The navigation manager waits for a task given to it. One of them is "findMarker". It calls a procedure with two loops: the outer one rotates the cart by 60 degrees and takes a depth image, the inner one rotates the head in 10 degree steps and takes a picture - roughly as in the skeleton below.
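As a skeleton (the handles stand for the rpyc / REST connections listed above; their method names here are placeholders, not the real interface):

def find_marker_scan(cart, head, kinect, aruco, floor_map):
    for cart_step in range(6):                    # 6 cart positions, 60 deg apart
        floor_map.add(kinect.top_view(), cart_step * 60)
        for head_step in range(6):                # head sweep in 10 deg steps
            head.rotate_to(head_step * 10)
            found = aruco.find_marker()           # False or (distance, orientation)
            if found:
                return found
        cart.rotate(60)                           # next cart orientation
    return False                                  # no marker visible from this spot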
For each cart rotation step the kinect depth info is passed back to the navigation manager, which integrates the new information into a "floorMap" (in my video these are the lines representing the limits of movement).
For each head position a request to the aruco process (rpyc) returns either False or the distance and orientation of the found marker. The aruco process uses the eye cam as it has a much better resolution than the kinect image data and would also allow for different y angles (through control of the neck servo).
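The marker detection itself is standard cv2.aruco territory from opencv-contrib, roughly like this sketch (simplified; dictionary, marker size and camera calibration of course have to match the actual setup):

import cv2
import numpy as np

ARUCO_DICT = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
MARKER_SIZE_M = 0.10           # edge length of the printed marker in metres

def locate_marker(image, camera_matrix, dist_coeffs):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
    if ids is None:
        return False                               # no marker in this view
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE_M, camera_matrix, dist_coeffs)
    distance = float(np.linalg.norm(tvecs[0][0]))  # metres from cam to marker
    # the Rodrigues rotation vector carries the marker orientation relative to the cam
    return distance, rvecs[0][0], tvecs[0][0]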
In case a marker is found the task "findMarker" is replaced by "approachMarker". This task tries to drive the cart in front of the marker (perpendicular to the marker at a 2 foot distance) and rotate the cart to point at it. I am planning to have the robot place its index finger on the marker, but that is not done yet.
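The geometry behind it is simple: step out from the marker along its plane normal by the stand-off distance, and that is the point to drive to. A sketch in camera coordinates, assuming the usual aruco convention that the marker's z axis points out of the tag towards the viewer (mapping the point into the floor map frame is a separate step):

import cv2
import numpy as np

STANDOFF_M = 0.6                  # roughly the 2 feet I want to end up at

def approach_point(rvec, tvec):
    """Return the point 60 cm in front of the marker plane plus the direction
    the cart should finally face, both still in the camera frame."""
    R, _ = cv2.Rodrigues(np.asarray(rvec, dtype=float).reshape(3, 1))
    normal = R[:, 2]                                  # marker plane normal (z axis)
    target = np.asarray(tvec, dtype=float).reshape(3) + normal * STANDOFF_M
    heading = -normal                                 # once there, face the tag
    return target, heading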
In case no marker is found I run into this issue of "where to go next".
I had some issues with the eye movement mechanics in the head, but once those are solved I will ask the cart controller to drive the bot to its new scan point and repeat the search. I will also try to take already visited observation points into account when deciding "where to go next".
One more thing I struggle with is the absolute orientation of my cart. The bno055 is supposed to be able to find magnetic north, but with the device fixed on my cart I can not get a usable value out of it. It works when I move the sensor around by hand, but then it either jumps to the new value or approaches it incredibly slowly, which makes it hard to do anything useful with it. A big advantage of an absolute orientation would be that it makes it a lot easier to reuse floor plans recorded in earlier runs.
I also tried to find a map building solution that takes care of moving objects and allows for map updates. The only promising thing I found so far is OctoMap, but I was not able to find information on how to make use of it.
Keep on going - I hope you can make progress with the walking. Once you can do that, be prepared for many new questions to come up - e.g. where to go to :-)
---
SLAM:
I am not aware of a SLAM service in MRL (and do not see one in the latest version) but have been trying to find existing SLAM solutions - so far without success.
To me - as a python guy - BreezySLAM would be the preferred starting point. However, it requires a LIDAR sensor, and while Simon (the maintainer of the library) would welcome a kinect version of it, he is also of the opinion that it would be a rather hard task to implement the kinect as a scan device.
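To illustrate what would be needed: BreezySLAM only wants a "laser" model plus one list of range readings per update, so in principle a kinect adapter could look like the untested sketch below. The catch is the kinect's roughly 57 degree field of view (against 240-360 degrees for a real LIDAR), which gives the scan matcher very little overlap to work with.

from breezyslam.algorithms import RMHC_SLAM
from breezyslam.sensors import Laser

# untested sketch: treat the middle row of the kinect depth image as a
# 640-beam "lidar" with a 57 degree field of view and ~4 m range
kinect_laser = Laser(640, 30, 57, 4000)      # beams, scan rate Hz, FOV deg, max mm
slam = RMHC_SLAM(kinect_laser, 500, 10)      # 500 x 500 px map covering 10 x 10 m

def slam_step(depth_frame_mm):
    # one "scan" = row 240 of the 480-row depth frame, as integer millimetres
    scan = [int(d) for d in depth_frame_mm[240, :]]
    slam.update(scan)
    return slam.getpos()                      # (x_mm, y_mm, theta_degrees)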
So for me this is more or less a full stop for SLAM at the moment.
Magnetic fields
Just a question, is the base of your robot metal?
Most metals will affect the results of an electronic compass, attenuating the field to the point of it not being usable.
I would suggest mounting the bno055 on the body of your robot, as far away from any metal as you can.
Just a thought.
I may yet have to work out some form of SLAM as well; I have just received a new range finding sensor with a 2 meter range indoors, based on time of flight.
When I work it out I'll do a blog on it.
Ray