MITRE's Collaborative Experimentation Environment: Putting Cooperation to the Test

January 2010
Topics: Collaborative Computing, Collaborative Decision Making
MITRE's Collaborative Experimentation Environment helps government agencies test their collaboration policies by staging experiments involving realistic operational data, experienced personnel, and authentic tools and equipment.

As a Category Four hurricane gathers strength off the coast of Texas, federal and state agencies coordinate the launch of unmanned aircraft. The planes will be used to track the storm's path, survey the damage, and locate the stranded survivors the hurricane leaves in its wake. But with FAA regulations limiting the number of unmanned vehicles in the air to one per air traffic control facility, the agencies flying the aircraft and the agencies needing the data from them must work together to prioritize the aircraft's use. The agencies have policies in place to prioritize their individual needs, but it is unclear whether these policies will work smoothly in the face of the oncoming crisis and the constraints of the airspace.

The implications of poor interagency coordination can be profound. Government investigations of both the 9/11 terrorist attacks and the federal and state responses to Hurricane Katrina point to breakdowns in communication among agencies and uncertainty over jurisdictions as contributing causes to the government inaction that marked those tragedies. The lesson of these investigations is unmistakable: The key to a quick and efficient governmental response to a catastrophe is effective interagency collaboration.

What Makes Good Collaboration?

To promote such cooperation, MITRE has developed a program called the Collaborative Experimentation Environment, or CEE. The program's goal is to help participants better understand the relationships among technologies, procedures, and policies across multiple organizations. CEE conducts experiments built around scenarios that mimic real-world events so that results are valid, measurable, and practical. The experiments are based on realistic operational data, staffed with experienced personnel drawn from relevant fields, and equipped with authentic tools and equipment.

When choosing which interagency collaborations would benefit most from experimentation, CEE co-directors Zach Furness and Valerie Gawron look for three criteria:

  1. The experiment has to involve multi-agency coordination.
  2. The experiment must require human participation. "If you can get an answer through a digital simulation without a human in the loop, that's not suited to us," says Gawron. "CEE is best suited for testing procedures involving decision-making and planning."
  3. The experiment has to have support from agency leaders to guarantee full participation.

Human-focused, Human-scaled

One of the main features of CEE's experiments is that they provide a better understanding of mission effectiveness by immersing individuals in an environment that approximates conditions they may face in real life. Many problems in mission effectiveness stem not from the underlying systems themselves but from the way people interact through those systems. Likewise, many problems across agencies arise from differences in culture and terminology and have little to do with the systems in use. By immersing participants in a realistic environment, CEE enables them to identify real-world problems and address them based on quantifiable measurements.

A second CEE feature is that its environment bridges the extremes offered by other experimentation environments. CEE experiments are typically more manageable than large distributed experimental events that involve thousands of individuals and take a year or more to plan. Furthermore, CEE experiments are usually more efficient and less costly to organize since they use existing MITRE experimentation labs that are stocked with resources (operational systems, data collection tools, scenario authoring tools, etc.) already available within the corporation. At the same time, CEE experiments are more comprehensive than small "table-top" seminar games that can be executed rapidly but often lack the necessary fidelity to uncover critical issues.

This Is Not a Drill

Gawron says CEE is designed to keep agencies from falling into the trap of exercising but not experimenting. "The drills that agencies typically run are very scripted. Everyone knows who will do what, when. Participants are expected to perform to the criteria in their policies or procedures, and they're not allowed to fail. But the unique focus that we bring at CEE is on performance metrics. If you use Tool A instead of Tool B, does that help you make your decisions faster, do you make better decisions, are you more efficient, are you more effective, do you coordinate more quickly with fewer errors, do all the people involved in a decision get all the information they need?"

Participants run through CEE experiments without any knowledge of the scenario ahead of time. All communications and interactions between and among the agencies are digitally recorded throughout the experiment. The collected data are then analyzed to see what aspects of the procedures work well and what aspects could work better.

Good Experiments Make Good Policy

CEE has conducted experiments addressing interagency responses to a hijacking mimicking the details of 9/11, prioritization of unmanned aircraft systems (UASs) in support of a hurricane response, and border screening during a pandemic flu outbreak. Participants react positively to CEE's "real data, real operators, real systems" philosophy. One participant in the hijacking experiment said of MITRE, "No place else could bring together the people, the real people, who work in the real world that are doing the job in real life." Another participant praised the realism of the event, saying, "MITRE was able to produce that element of tension and pressure, which was extraordinarily realistic."

It's the end results of the experiments, however, that matter most to Gawron. "Every single experiment has led to a change in procedure or policy. At the end of an experiment, participants immediately begin making changes. For example, we ran the hurricane experiment in March. By the June 1st opening of the hurricane season, the participating agencies had new procedures in place improving the effectiveness of UAS missions during a disaster response."

Planning for a Pandemic

In August 2009, MITRE conducted a Pandemic Influenza Experiment (PIE) as part of the CEE. The experiment examined policies currently in place among the U.S., Mexico, and Canada for responding to an influenza outbreak that originates outside North America. These policies call for screening every airplane passenger who flies into the U.S. during a pandemic flu outbreak, but they had never been tested.

Grace Hwang, one of the MITRE leads for the experiment, describes the scope of the PIE. "We simulated the entire national airspace, as well as the 19 planned airport quarantine stations. Through the experiment, we analyzed the quarantine stations' capacities and studied how screening every single passenger may delay national air traffic."
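MITRE has not published the internals of its airspace simulation, but the capacity question Hwang describes can be illustrated with a toy sketch: a single hypothetical quarantine station modeled as a multi-lane queue, where adding or removing screening lanes changes how long passengers wait. The arrival rate, screening time, lane counts, and function names below are invented for illustration and are not drawn from the PIE itself.

```python
# Illustrative sketch only -- not MITRE's model. All numbers here
# (arrival rate, screening time, lane counts) are hypothetical.
import heapq
import random
import statistics


def simulate_station(n_passengers, arrivals_per_min, mean_screen_min, lanes, seed=1):
    """Toy multi-lane screening queue: passengers arrive at random
    intervals and wait for the first screening lane to come free."""
    rng = random.Random(seed)
    lane_free = [0.0] * lanes          # time at which each lane next becomes free
    heapq.heapify(lane_free)
    clock, waits = 0.0, []
    for _ in range(n_passengers):
        clock += rng.expovariate(arrivals_per_min)       # next passenger arrives
        start = max(clock, heapq.heappop(lane_free))     # wait for earliest free lane
        waits.append(start - clock)
        heapq.heappush(lane_free, start + rng.expovariate(1 / mean_screen_min))
    return statistics.mean(waits), max(waits)


if __name__ == "__main__":
    # Three passengers per minute, two minutes of screening each:
    # the station needs at least six lanes just to keep pace.
    for lanes in (4, 6, 8):
        avg, worst = simulate_station(5000, arrivals_per_min=3.0,
                                      mean_screen_min=2.0, lanes=lanes)
        print(f"{lanes} lanes: avg wait {avg:5.1f} min, worst {worst:5.1f} min")
```

Running the sketch with different lane counts shows the basic dynamic the experiment probed on a national scale: once arrivals outpace a station's screening capacity, waits grow without bound and delays ripple back into the air traffic system feeding that airport.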

Ironically, several agencies that had planned to take part in an early walkthrough of the experiment could not attend because they were scrambling to respond to the initial outbreaks of H1N1. That absence only underscored the experiment's importance to many of the participants and heightened their interest as the event drew closer.

The Right Tools for the Job

The crux of the experiment was to study how participants' use of collaboration tools affects the effectiveness of the screening program. Passenger screening depends on coordination among multiple agencies, including the Federal Aviation Administration (FAA) and the Centers for Disease Control and Prevention (CDC), so the central hypothesis of the experiment was that better information sharing across these agencies would lead to improvements in managing air traffic, passenger flow through airports, and the handling of ill or possibly infected passengers.

The PIE had an immediate impact on the agencies that participated. The FAA and the CDC implemented changes in the reporting of airline passenger information. Gawron and Furness hope to mirror this success in future experiments. For 2010 they have planned experiments involving issues such as airspace surveillance and security, and cybersecurity, and the MITRE team anticipates these new CEE scenarios will uncover additional ways to strengthen interagency cooperation.

—by Christopher Lockheardt
