
Fooled Again? Developing Counter-deception Decision Support


May 2004

[Illustration: the classic con artist's shell game, three shells and a pea]

If you've seen The Sting, The Heist, or The Usual Suspects, you know how difficult it can be to catch a con artist's tricks and lies. Now, envision trying to gather all the information necessary to second-guess a wily enemy, such as a wartime foe. No repeat viewings allowed: everything happens in real time, with real stakes.

Take D-Day, for instance. Using elaborate ruses, the Allies convinced the Germans the invasion would be at Pas de Calais, and that the landings at Normandy were a diversion. How could the Germans be fooled? They had extensive networks of intelligence agents and collaborators, plus signals and aerial intelligence—and good reason to be suspicious, since elaborate deceptions had preceded every previous Allied landing. Yet the Germans, themselves highly skilled in deception, were fooled.

Deception—whether by a con artist or an enemy force—is an everyday occurrence. But while deception is commonplace, systematic methods for spotting it are not, partly because common reasoning strategies aid the deceivers. MITRE researchers Frank Stech and Chris Elsaesser, co-principal investigators of a project called "Counter-deception Decision Support," are attempting to automate deception detection, a process that should help analysts and decision-makers decide if deception is afoot.

Fool Me Once . . .

"Deception" is fooling someone for one's own advantage. "Counter-deception" means knowing how to detect deception. People get deceived mainly because they believe they know what's going on and that they're too smart to be tricked. All of us have biases in our modes of thinking that make us overestimate our capabilities to explain and predict behavior. People who automatically consider the possibility of deception usually appear paranoid. Considering the possibility of deception, however, is the first step to detecting it.

Once you become suspicious that you're being fooled, it can be hard to know or describe exactly why. "Having a hunch" isn't an argument that will persuade the powers-that-be to change a major decision. Nevertheless, there are systematic processes for counter-deception. For example, a "theory of spoof unmasking," based on the work of World War II uber-deception analyst R. V. Jones, helps isolate, clarify, and quantify one's suspicions.

"It's the rare individual who does counter-deception thinking intuitively," says Stech. "That's why we believe there should be decision-support theories, process models, and reasoning support tools. There's too much going on in elaborate deceptions to try doing this in your head without some kind of analytic reasoning help. We can all be fooled, at least some of the time, as P.T. Barnum said."

Connecting the "Dots"

The MITRE team bases much of its work on the Analysis of Competing Hypotheses, or ACH, originally proposed by social scientist Richards J. Heuer, Jr., a long-time Central Intelligence Agency analyst. According to Elsaesser and Stech, ACH outlines ways to consider and analyze inconsistent and atypical information against competing hypotheses about what's really going on (including the possibility of deception), as well as to test those hypotheses systematically.

The problem with ACH is that no human can hold in mind all the evidence that goes into developing and analyzing competing hypotheses. Add a deception hypothesis and the alternatives grow exponentially. People overlook deception because they can't possibly manage all the details.
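
To see what that bookkeeping looks like, consider a toy ACH matrix. The sketch below is purely illustrative: the hypotheses, evidence items, and consistency scores are invented for this article and are not drawn from Heuer's writings or MITRE's tools. It does, however, capture ACH's central move: prefer the hypothesis with the least evidence against it, since evidence consistent with everything (easy for a deceiver to fake) proves nothing.

```python
# Hypothetical ACH bookkeeping sketch. Hypotheses, evidence items, and
# scores are invented for illustration. Scores: +1 consistent with the
# hypothesis, 0 neutral, -1 inconsistent.

HYPOTHESES = [
    "main landing at Pas de Calais",
    "main landing at Normandy; Calais indicators are deception",
]

# evidence item -> consistency with each hypothesis above
EVIDENCE = {
    "heavy radio traffic opposite Calais":          [+1, +1],  # fits both: no diagnostic value
    "recon never finds the reported supply dumps":  [-1, +1],  # the dog that didn't bark
    "landing craft concentrating in western ports": [-1, +1],
    "captured orders naming Calais":                [+1,  0],
}

def least_refuted(hypotheses, evidence):
    """Heuer's key move: prefer the hypothesis with the LEAST
    evidence against it, not the most evidence for it."""
    against = [0] * len(hypotheses)
    for scores in evidence.values():
        for i, score in enumerate(scores):
            if score < 0:
                against[i] += 1
    best = min(range(len(hypotheses)), key=against.__getitem__)
    return hypotheses[best], against

print(least_refuted(HYPOTHESES, EVIDENCE))
# -> the deception hypothesis survives with the fewest inconsistencies
```

Even in this toy version, the matrix makes plain why software helps: every new evidence item must be scored against every live hypothesis, and adding a deception hypothesis forces a second look at evidence that previously seemed settled.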

Furthermore, deception detection depends on two reasoning skills in which humans are particularly weak: reasoning about negative or absent evidence (for example, Sherlock Holmes's dog that did not bark in the night) and false evidence ("this agent has been turned and is working for the enemy"). For these reasons, Stech and Elsaesser focus their research on developing computerized tools to help analysts identify deception. (The two make a good team for the task: Stech's background is in psychology and intelligence analysis, while Elsaesser's specialty is artificial intelligence.)

They describe their thinking in terms of bits of evidence or "dots." Dots might be certain people or money or battalions of troops. The team has broken the deception-detection process into four parts that continually loop back on themselves as you gain more information:

  1. Find the "dots." Take the data you've collected and determine whether anything is wrong or out of place. Consider that the evidence may be manipulated. Actively separate the anomalous (unusual or surprising) from the apparently normal dots.
  2. Characterize the dots. Do anomalies represent deception, or do they have some other cause? For instance, should that money have moved from one account to another? Why did all those troops suddenly march to the border?
  3. Connect the dots. Now you must come up with reasons why the specific dots raised your suspicions by linking them to different hypotheses. Map which dots support the various hypotheses and to what extent. Then ask: How surprising is the evidence if the hypothesis is true? If it is false? How surprising would the opposite evidence be in either case? What evidence should we have seen that we haven't seen? What's missing? Is it missing because we failed to collect it or because something is hidden or manipulated? Did we collect this evidence but underestimate its importance? This is the ACH stage—where the team's software tools come into play. (One way to read these "how surprising" questions is sketched just after this list.) Feedback from this review of the evidence and the hypotheses refines steps one and two: new dots become important and new ways to characterize anomalies become apparent.
  4. Present your case to the decision-makers. If you don't make your case carefully, the decision-makers may not change their course of action. Often, the most compelling argument is based on successful predictions, such as: "If there is deception, and our side does X, we can predict their side will do Y, but not Z, allowing us to collect more relevant data, which will reveal the following…." Tools should aid this process of estimating and predicting future relations between the evidence and the hypotheses and of focusing data-collection activities to support such predictions.
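
The "how surprising" questions in step three have a natural Bayesian reading: a piece of evidence shifts belief only to the extent that it is more likely under one hypothesis than under its rivals. The article doesn't specify the formalism Stech and Elsaesser use, so the following is just a minimal sketch with invented probabilities.

```python
def update(prior, p_e_if_h, p_e_if_not_h):
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio.
    Evidence that is about equally likely either way (ratio near 1) barely
    moves belief -- which is exactly what a good deceiver aims for."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * (p_e_if_h / p_e_if_not_h)
    return posterior_odds / (1.0 + posterior_odds)

# Invented numbers: H = "we are being deceived"
p = 0.10                   # initial suspicion of deception
p = update(p, 0.90, 0.20)  # supply dumps we should have seen are missing
p = update(p, 0.80, 0.50)  # agent reports arrive suspiciously on schedule
print(f"P(deception | evidence) = {p:.2f}")  # -> 0.44
```

Note that the first update (negative evidence, the missing supply dumps) does most of the work here, echoing the point above that reasoning about absent evidence is both the weakest human skill and often the most diagnostic signal.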

"As you start examining how the evidence relates to the various hypotheses, you go back and look for more dots—that's where tools and humans start to come together," Stech says. "You may have to go back and collect more evidence or review and recharacterize the anomalies you have. We want to support decisions to collect or not collect and analyze or not analyze data we might have otherwise overlooked. We want to gather what's critical to test hypotheses and avoid collecting what's going to be immaterial.

"Our methods should help intelligence analysts decide whether one hypothesis or another is most likely, as well as how to support those hypotheses. Just as critically, it shows the analysts what evidence—positive or negative, true or false—is most critical, so analysts can focus their 'spoof unmasking' skills on the most important dots. This dynamic is really the heart of the deception-detection process," he adds.

Tooling Up Against Fooling

By codifying the dots and how they are handled, the team has begun building computer-based tools to model decision support for counter-deception, using a MITRE system called Adversarial Planner. This tool takes elements of a potential deception scheme and generates plans and alternate plans, down to the details needed for detecting deception. The tool makes assumptions about the likelihood of specific scenarios occurring and assigns a probability to them.

"Our first research hypothesis is this: Can we automate the overall framework and general theory?" Elsaesser says. "Our second hypothesis, which will probably be part of follow-up research, is: Can we create interfaces so the tool can be applied to several knowledge domains? We're working on automating the logic of all this with statements such as, 'This event is impossible to happen at the same time as that event.'"

The researchers' goal is to help MITRE sponsors—including those from the military, intelligence community, homeland security, and the IRS—generate hypotheses and keep track of their theories and evidence. Adversarial Planner is application-independent and can be applied to most domains, from intelligence and military counter-deception work to forensic accounting and fraud detection.

Elsaesser explains, "One of the biggest problems you face is not just uncovering deception, but figuring out what the deceiver is really doing. We want to avoid what's called the '180-degree fallacy'—that is, if they're not doing one thing, they're doing the opposite. Just because an army is not marching south does not mean it's marching north. Just because you can recognize a deception doesn't mean you know what will happen. But it does mean you know what will not happen—the deception, such as the landings at Pas de Calais. Since you've avoided doing what the enemy, terrorist, or con artist wanted, you still come out way ahead."

The Human Factor

"One of the things I think we're adding to the state of the knowledge is very powerful general descriptions of deception and counter-deception, based on fundamental cognitive principles, that seem to apply across domains," Stech adds. "We found this in our research into domains such as forensic accounting and toll-call fraud, and we've also seen the commonalities in military and diplomatic-political deceptions and how they can integrate into this model."

Elsaesser notes, "The key to our tool-building is this: If it's a deception, it's not the adversary's true course of action, so through some logical means we should be able to uncover it. We just hope to be able to uncover it before it becomes obvious and bad things happen."

—by Alison Stern-Dunyak

