MITRE Participation in the DARPA Grand Challenge

By Mike Shadid and Ann Jones

Tune in to the nightly news and you may well see an unmanned vehicle system in action. Unmanned aircraft are patrolling the airspace in Iraq, while ground robots are inspecting vehicles and detonating roadside bombs. However, these robots tend to be limited to controlled environments where robot developers know exactly how their systems will perform. A battlefield is not, and will never be, a known and controlled environment. For unmanned vehicle systems to truly take their proper role in war by removing humans as far as possible from danger, robots will need to operate independently in unknown environments.

With this need in mind, the Defense Advanced Research Projects Agency (DARPA) sponsored the 2005 Grand Challenge, an event designed to promote the rapid development of autonomous unmanned vehicle systems. The goal of the Grand Challenge was to design and develop a vehicle that could autonomously navigate a difficult, 135-mile, off-road course without any human intervention. MITRE believed this challenge provided a good opportunity to develop a test platform for our sponsors, gain a deeper understanding of the problem space, and engage with the robotics community. Through a year of intense development, the MITRE team transformed a commercial Ford Explorer Sport Trac into a fully autonomous vehicle that ultimately qualified for one of 23 spots in the final competition.

The MITRE Meteor

The team had just 12 months to move from a commercial off-the-lot truck to a vehicle that could operate autonomously in desert terrain. Meeting this goal with any chance of success required a solid plan and considerable discipline. We laid out a series of goals at one- to two-month intervals. The first was to create a drive-by-wire platform. The second was to drive autonomously from waypoint to waypoint. The next was to follow multiple waypoints with simple obstacle avoidance. Each of these early goals involved a series of tests culminating in a demonstration. After a company-wide contest, the MITRE-sponsored entry was named the MITRE Meteor. Then came the tough part: building it.

To house the sensors and processors, we added self-contained, reusable component racks to the Meteor. The largest rack was mounted on top of the vehicle and provided the platform for the global positioning receivers, inertial navigation system, and magnetic compass unit. It also housed two lasers that pointed sharply down at the road, looking for hazards such as large rocks or boulders. A second sensor rack was mounted to the front grill guard assembly and housed the primary obstacle and road-terrain laser scanners. These sensors were responsible for helping steer the vehicle around larger obstacles, such as parked cars, and through tunnels. To house all the computers and power equipment, we removed part of the rear passenger seat and added a component rack. The assembly was shock-mounted in five places to minimize the effect of driving on rough terrain. In addition to the computing hardware, the front passenger bay contained a mounted monitor, keyboard, and mouse, allowing an observer to oversee and interact with the system during testing.

Off the Shelf

MITRE's advice to sponsors who need rapid and robust capabilities is to use commercial off-the-shelf (COTS) components where possible, so we followed our own advice. This allowed us to devote more time and resources to innovation. Moreover, to be relevant to our sponsors, we wanted to develop a vehicle control system with inexpensive, easy-to-acquire sensing and processing platforms that could be generalized to many different types of vehicles, such as Humvees, tracked vehicles, or large trucks.

We used COTS components from several different fields of technology. We acquired the drive-by-wire capability from a retrofitting company that adapts vehicles for drivers with disabilities. We installed global positioning sensors used by farmers for precise field tending. The inertial navigation units we used to measure vehicle tilt came from a commercial robotics company. And our laser rangefinders, used to detect obstacles and other cars, are commonly used in industry. By adopting the COTS philosophy, we had a vehicle ready to start testing within three months.

Smart Software

Now the Meteor needed some brains. Our software development was driven by two overarching themes: first, design in small increments using a "develop, simulate, and test" methodology; second, maintain a continuous end-to-end system built from components of comparable complexity and quality. This approach meant that at any time the vehicle had all the components it needed to operate, and it placed the emphasis on the interaction and integration of agents. We designed a generalized software architecture to support different types of vehicle platforms and the integration of multiple units. Even though it initially took considerably longer to design and develop a reusable, distributed-agent software architecture, the software proved to be robust and made it easy to add capability and functionality.
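The article describes the agent-based design only at a high level, so the following is a minimal, hypothetical sketch (in Python, with invented names such as VehicleState and WaypointFollower) of how a waypoint-following agent might fit into a distributed-agent architecture of the kind described above: the agent sees only the latest vehicle state and returns steering and speed commands, so the same code can be fed by live sensors, a simulator, or recorded logs.

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float        # east position, meters
    y: float        # north position, meters
    heading: float  # radians, measured counter-clockwise from east
    speed: float    # meters per second

class WaypointFollower:
    """Hypothetical navigation agent: steer toward the next waypoint and
    advance through the route as each waypoint is reached."""

    def __init__(self, waypoints, arrival_radius=3.0, cruise_speed=10.0):
        self.waypoints = list(waypoints)      # [(x, y), ...] in meters
        self.arrival_radius = arrival_radius  # "close enough" distance, meters
        self.cruise_speed = cruise_speed      # commanded speed, meters per second
        self.index = 0                        # next waypoint to visit

    def step(self, state: VehicleState):
        """Return a (steering, target_speed) command for the current state."""
        # Skip any waypoints we are already close enough to.
        while self.index < len(self.waypoints):
            wx, wy = self.waypoints[self.index]
            if math.hypot(wx - state.x, wy - state.y) > self.arrival_radius:
                break
            self.index += 1
        if self.index >= len(self.waypoints):
            return 0.0, 0.0                   # route complete: stop

        wx, wy = self.waypoints[self.index]
        bearing = math.atan2(wy - state.y, wx - state.x)
        # Smallest signed angle between the current heading and the bearing to the waypoint.
        error = math.atan2(math.sin(bearing - state.heading),
                           math.cos(bearing - state.heading))
        # Simple proportional steering, clamped to the steering limits.
        steering = max(-0.5, min(0.5, 1.5 * error))
        return steering, self.cruise_speed
```

Because such an agent interacts with the rest of the system only through the state it receives and the commands it returns, the same step() routine can be exercised in simulation or against field logs long before it drives real steering servos, which is the kind of continuous, end-to-end, develop-simulate-test flow described above.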
The Competition Begins

The initial test of the Grand Challenge was to produce a video demonstrating the vehicle's autonomy. DARPA would grade the videos to determine which of the initial 195 teams would qualify for a site visit. MITRE's video showcased the Meteor autonomously driving around a farm in Virginia and avoiding trash cans in a MITRE parking lot. The video impressed DARPA enough to schedule MITRE for a site visit, where the Meteor would have to prove that it could navigate a 200-meter path and avoid randomly placed trash cans, all while operating at speeds of up to 25 miles per hour.

Now the pressure was really on. We knew DARPA could place the obstacles anywhere along the 200-meter course, so we needed to prepare for every combination. While rigorous testing had eliminated any significant software bugs early on, tuning the many interrelated parameters was painstaking. The test course we chose was neither straight nor level, so there were places on the route where the Meteor could encounter an obstacle just after swinging around a corner or coming up over a hill. If the obstacle was in just the wrong place, the Meteor would hit it. Just two days before the site visit, we seemed to have most of the parameters tuned so that the Meteor would navigate the course successfully. And it did! We passed our site visit and qualified for the next round. The next official step would be the National Qualifying Event in September.
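Why an obstacle hidden behind a corner or hill crest makes this tuning so delicate can be seen with a little stopping-distance arithmetic. The figures below are purely illustrative; the Meteor's actual latencies, deceleration limits, and sensor ranges are not given in this article.

```python
def stopping_distance(speed_mph, reaction_time_s=0.5, decel_mps2=3.0):
    """Meters needed to detect an obstacle, decide, and brake to a stop.
    Illustrative parameter values only; not the Meteor's actual numbers."""
    speed = speed_mph * 0.44704                # miles per hour -> meters per second
    thinking = speed * reaction_time_s         # distance covered while sensing and planning
    braking = speed ** 2 / (2.0 * decel_mps2)  # distance covered while decelerating
    return thinking + braking

# At the 25 mph site-visit speed limit, these assumptions give roughly 26 meters:
print(round(stopping_distance(25.0), 1))       # -> 26.4

# If a blind corner or crest hides a trash can until it is only 15 meters away,
# the vehicle must either swerve around it or take that stretch more slowly:
print(round(stopping_distance(15.0), 1))       # -> 10.8
```

Tuning is therefore a negotiation among speed caps, sensor look-ahead, and swerve clearances, and a course that is neither straight nor level changes the effective look-ahead from one spot to the next.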
Testing in the Desert

We knew that farms and parking lots no longer provided a sufficient test for the Meteor. It was time for a week of testing in the desert. We wanted to get as close to a final configuration as possible, but we also knew we would need to make changes based on what we learned. There just wasn't enough time! The team headed out to the Mojave Desert in the middle of the summer.
The Meteor drove autonomously at speeds up to 35 mph for tens of miles
at a time, despite daily temperatures that reached 129 degrees. We came
back from the desert exhausted, elated, and with a ton of work still to
do. We needed to remount the computer rack that had shaken loose, adjust
the laser sensors to improve low obstacle detection, and develop new algorithms
based on testing. The team also had masses of data to analyze and only a few short months until the competition.

The Qualifier

Finally it was time for the National Qualifying Event. DARPA had invited 43 teams to the qualifier; the event would narrow the field to the 23 teams that would run in the finals. We traveled back out west to the California Speedway, where a grueling, 2.7-mile obstacle course awaited the Meteor, complete with gates, hills, a mock tunnel, a hay bale maze, tank traps, and several cars parked along the route. When we surveyed the course, we felt confident that we had prepared well. Now it was up to the Meteor. DARPA scheduled four runs for each participant but would allow more if there was time. Runs were evaluated on the number of obstacles avoided and the time to complete the course.

On our first run, the Meteor pulled out of the start chute and headed for the first gate. At the last minute, it veered away, headed for the bleachers, and stopped. Of the entire 2.7-mile course, the Meteor had covered only about 100 yards, and only for a few minutes. We were stunned. We had two days until our next chance, so we started combing through the data to find out what went wrong. Because we had recorded all the information, we were able to make adjustments in the field. It took two more attempts, but we made it around the entire course successfully. We then added three more successful runs and qualified for the final event.

The Grand Challenge

Of the 195 serious entrants in the Grand Challenge, 23 made it to the finals; we were one of the select few. Lying in wait for the finalists was a course that cut through 135 miles of desert, passing through two tunnels, over a dry lakebed, across railroad tracks, through Beer Bottle Pass, and up a steep mountain range before returning to the finish line.

Race day finally arrived. The Meteor left the chute in front of a large crowd cheering us on. Driverless, our vehicle made its way in front of the stands and headed into the desert. Sadly, the robot traveled a little over a mile before it ran into problems with blowing sand. Winds in the desert can blow quite hard and stir up large volumes of sand and dust. We had encountered similar dust clouds during our earlier testing in the desert: the laser scanners would detect them and report them as obstacles, and the vehicle would stop or slow down, but once the clouds blew past, our robot continued. On race day we suffered a constant series of sand clouds blowing along the right side of the Meteor, forcing it further and further to the left. Finally the Meteor was facing tall plants and bushes along the side of the dirt path. Since the sensors couldn't tell the difference between plants and rocks, the robot erred on the side of caution and stopped. Though we achieved our goal of making it all the way from a dream to the finals in just under a year, it was still disappointing to see the Meteor stop so close to the starting gate.

Lessons Learned

MITRE's Grand Challenge experience validated three philosophies: reliance on COTS, multiplicity of use, and the model-build-test cycle. Purchasing COTS equipment and services freed us to concentrate on more challenging and critical areas, such as the installation of a COTS steering and propulsion servo-control system and laser scanners, and the modification of the vehicle's suspension. A disciplined approach to software construction lets good ideas accumulate without the fear that the system is turning into a house of cards. One way to increase confidence in software is to use it for as many purposes as appropriate.
For the Grand Challenge, the same body of software was used to operate the robot, run faster-than- and slower-than-real-time laboratory simulations, and study logs recorded in the field (a minimal sketch of this idea appears at the end of this article). This multiplicity of use exposed problems in the lab before they could waste expensive field-testing time.

The model-build-test cycle accumulates experience in a tangible and persistent way. When a problem is found or a new phenomenon identified, it is first modeled in the simulation environment. With a simulation of the problem or phenomenon in hand, the body of operational code is adjusted to deal with it. Once the changes are proven in simulation, the robot is field tested to evaluate them, and the results are fed back into the model. As a result, the model grows in fidelity and becomes a lasting repository of project experience.

On the Right Path

The lessons MITRE has learned in reaching the finals of the DARPA Grand Challenge are already bearing fruit. By teaming the Meteor with other autonomous robots (see page 10, "Increasing Mission Footprint through Robot Pairs"), MITRE is designing safer methods for warfighters to pursue their missions. Although the Meteor was stopped short of its goal, MITRE's ingenuity will continue to help our clients reach the finish line in their own robotic endeavors.
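As a concrete illustration of the multiplicity-of-use principle from the Lessons Learned section, one common way to let a single body of control software run the real vehicle, run laboratory simulations faster or slower than real time, and replay field logs is to route every time lookup and delay through a swappable clock. The sketch below uses hypothetical names and is not a description of the Meteor's actual implementation.

```python
import time

class WallClock:
    """On-vehicle operation: time flows at its normal rate."""
    def now(self):
        return time.time()
    def sleep(self, seconds):
        time.sleep(seconds)

class ScaledClock:
    """Laboratory simulation: the same code runs faster or slower than real time."""
    def __init__(self, scale=10.0):             # e.g. 10x faster than real time
        self.scale = scale
        self._wall_start = time.time()
        self._sim_start = self._wall_start
    def now(self):
        return self._sim_start + (time.time() - self._wall_start) * self.scale
    def sleep(self, seconds):
        time.sleep(seconds / self.scale)

class LogClock:
    """Log replay: time is whatever the recorded data says it was."""
    def __init__(self):
        self._t = 0.0
    def advance_to(self, logged_timestamp):
        self._t = logged_timestamp
    def now(self):
        return self._t
    def sleep(self, seconds):
        self._t += seconds                       # no real waiting during analysis

def control_loop(clock, read_state, send_command, period=0.1):
    """One control loop serves the truck, the simulator, and log study,
    because it touches time only through the clock it is handed."""
    while True:
        state = read_state(clock.now())          # sensors, simulator, or log reader
        if state is None:                        # shutdown or end of log
            break
        send_command(state)                      # drive-by-wire, simulated plant, or analysis sink
        clock.sleep(period)
```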
For more information, please contact Ann Jones.