This summer, my friend Bill Mania and I entered our robot in the ChiBots SRS RoboMagellan contest. To steal the description directly from the website:
> Robo-Magellan is a robotics competition emphasizing autonomous navigation and obstacle avoidance over varied, outdoor terrain. Robots have three opportunities to navigate from a starting point to an ending point and are scored on time required to complete the course with opportunities to lower the score based on contacting intermediate points.
Basically, we had to develop a robot that could navigate around a campus-like setting, find GPS waypoints marked by orange traffic cones, and do it faster than any of the other robots entered.
To give you an idea of what this looked like for us, here’s a picture of us testing in Bill’s backyard:
For our platform, we used a modified version of the CoroWare CoroBot, with additional sensors like ultrasonic rangefinders, a 6-DOF IMU, and wheel encoders.
Our software platform was ROS (rospy specifically), and we made liberal use of navigation stack components like robot_pose_ekf and move_base. We were even able to attend the very first ROSCon in St. Paul, MN, which was a blast and greatly expanded our knowledge of the software and its capabilities.
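If you haven't seen ROS code before, here's a minimal rospy node to give you a flavor of the style. This is just a sketch, not code from our robot: the topic names (/ultrasonic, /cmd_vel), the node name, and the stop distance are all illustrative.

```python
#!/usr/bin/env python
# Minimal rospy node: stop when the forward ultrasonic rangefinder sees
# an obstacle, otherwise creep ahead. Illustrative only -- topic names
# and thresholds are made up for this sketch.
import rospy
from sensor_msgs.msg import Range
from geometry_msgs.msg import Twist

def on_range(msg):
    cmd = Twist()
    # Drive forward at 0.3 m/s unless something is within half a meter.
    cmd.linear.x = 0.0 if msg.range < 0.5 else 0.3
    cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('obstacle_stop_sketch')
    cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('/ultrasonic', Range, on_range)
    rospy.spin()  # hand control to ROS; the callback does the work
```

Nearly everything on the robot followed this shape: subscribe to sensor topics, do a little math, publish commands.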
Over the next few weeks, I’ll be writing more detailed posts about the robot and specific challenges we faced, including:
- Hardware and sensor overview
- Using robot_pose_ekf to fuse IMU and wheel encoder data so we could navigate by dead reckoning
- Localization in ROS using a very, very sparse map
- Our attempts to use the move_base stack with hobby-grade sensors, and why we ended up writing our own strategy node
- Using OpenCV + ROS to find an orange traffic cone, and using this feedback to “capture” the waypoint (a rough sketch of the color detection follows this list)
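As a teaser for that last post, the core idea is simple: convert the camera frame to HSV, threshold for orange, and take the centroid of the largest blob. The sketch below shows the technique rather than our exact pipeline, and the HSV bounds are ballpark values you'd tune for your own camera and lighting.

```python
#!/usr/bin/env python
# Sketch of orange-cone detection with OpenCV. Not our exact pipeline;
# the HSV bounds below are rough values that need tuning per camera.
import cv2
import numpy as np

def find_cone(bgr_frame):
    """Return the (x, y) centroid of the largest orange blob, or None."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Orange sits roughly at hue 5-20 on OpenCV's 0-179 hue scale.
    mask = cv2.inRange(hsv, np.array([5, 100, 100]),
                       np.array([20, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m['m00'] == 0:
        return None
    return (m['m10'] / m['m00'], m['m01'] / m['m00'])
```

Once you have the centroid, steering toward the cone mostly reduces to comparing its x coordinate against the center of the image.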
In the meantime, enjoy this video of the above scene, from the robot’s point of view!