Fooled Again? Developing Counter-deception Decision Support
If you've seen The Sting, The Heist, or The Usual Suspects, you know how difficult it can be to catch a con artist's tricks and lies. Now, envision trying to gather all the information necessary to second-guess a wily enemy, such as a wartime foe. No repeat viewings allowed: everything happens in real time, with real stakes.
Take D-Day, for instance. Using elaborate ruses, the Allies convinced the Germans the invasion would be at Pas de Calais, and that landings at Normandy were a diversion. How could the Germans be fooled? They had extensive networks of intelligence agents, collaborators, signals and aerial intelligence—and good reason to be suspicious, since elaborate deceptions had preceded every previous Allied landing. Yet the Germans, themselves highly skilled in deception, were fooled.
Deception—whether by a con artist or an enemy force—is an everyday occurrence. But while deception is commonplace, systematic methods for spotting it are not, partly because common reasoning strategies aid the deceivers. MITRE researchers Frank Stech and Chris Elsaesser, co-principal investigators of a project called "Counter-deception Decision Support," are attempting to automate deception detection, a process that should help analysts and decision-makers decide if deception is afoot.
Fool Me Once . . .
"Deception" is fooling someone for one's own advantage. "Counter-deception" means knowing how to detect deception. People get deceived mainly because they believe they know what's going on and that they're too smart to be tricked. All of us have biases in our modes of thinking that make us overestimate our capabilities to explain and predict behavior. People who automatically consider the possibility of deception usually appear paranoid. Considering the possibility of deception, however, is the first step to detecting it.
Once you become suspicious that you're being fooled, it can be hard to know or describe exactly why. "Having a hunch" isn't an argument that will persuade the powers-that-be to change a major decision. Nevertheless, there are systematic processes for counter-deception. For example, a "theory of spoof unmasking," based on the work of World War II uber-deception analyst R. V. Jones, helps isolate, clarify, and quantify one's suspicions.
"It's the rare individual who does counter-deception thinking intuitively," says Stech. "That's why we believe there should be decision-support theories, process models, and reasoning support tools. There's too much going on in elaborate deceptions to try doing this in your head without some kind of analytic reasoning help. We can all be fooled, at least some of the time, as P.T. Barnum said."
Connecting the "Dots"
The MITRE team bases much of its work on the Analysis of Competing Hypotheses, or ACH, originally proposed by social scientist Richards J. Heuer, Jr., a long-time Central Intelligence Agency analyst. According to Elsaesser and Stech, ACH outlines ways to consider and analyze inconsistent and atypical information against competing hypotheses about what's really going on (including the possibility of deception), as well as to test those hypotheses systematically.
The problem with ACH is that the typical human can't weigh all the evidence that goes into developing and analyzing different hypotheses. Add a deception hypothesis and the alternatives grow exponentially. People overlook deception because they can't possibly manage all the details.
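The core bookkeeping behind ACH can be sketched in a few lines. This is an illustration, not Heuer's published procedure or the MITRE tool; the evidence items and scores below are invented for the D-Day example. Heuer's key move is to tentatively reject the hypotheses with the most inconsistent evidence, rather than to accept the one with the most consistent evidence, since planted "dots" can make a wrong hypothesis look well supported.

```python
# Toy ACH-style consistency matrix: each evidence item is scored as
# consistent (+1), neutral (0), or inconsistent (-1) with each hypothesis.
# All items and scores here are hypothetical, loosely based on the
# Allied D-Day deception described above.

hypotheses = ["invasion at Pas de Calais", "invasion at Normandy"]

evidence = {
    # evidence item: (score vs. Calais, score vs. Normandy)
    "radio traffic concentrated near Dover": (+1, -1),
    "armor massing visible in Kent":         (+1, -1),
    "landing craft massing in the southwest": (-1, +1),
}

def inconsistency_count(h_index):
    """Number of evidence items that contradict hypothesis h_index."""
    return sum(1 for scores in evidence.values() if scores[h_index] == -1)

for i, h in enumerate(hypotheses):
    print(h, "-> inconsistent items:", inconsistency_count(i))
```

Note how the two planted items make Normandy look like the weaker hypothesis, which is exactly why ACH must be extended with an explicit deception hypothesis: some "dots" may be false evidence.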
Furthermore, deception detection depends on two reasoning skills in which humans are particularly weak: reasoning about negative or absent evidence (for example, Sherlock Holmes's dog that did not bark in the night) and false evidence ("this agent has been turned and is working for the enemy"). For these reasons, Stech and Elsaesser focus their research on developing computerized tools to help analysts identify deception. (The two make a good team for the task: Stech's background is in psychology and intelligence analysis, while Elsaesser's specialty is artificial intelligence.)
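Why absent evidence should change an analyst's beliefs can be shown with a one-function Bayes sketch. This is my illustration of the Holmes example, not the researchers' software: if a hypothesis strongly predicts an observation, then failing to observe it should lower belief in the hypothesis. The probabilities below are made up for the example.

```python
# Bayesian update that handles both observed and absent evidence.
# H = "a stranger entered the stable"; E = "the dog barked".
# A stranger would almost surely make the dog bark, so silence
# (the dog that did NOT bark) argues strongly against H.

def update(prior, p_e_given_h, p_e_given_not_h, observed):
    """Return P(H | E) if observed, else P(H | not E)."""
    if observed:
        num = p_e_given_h * prior
        den = num + p_e_given_not_h * (1 - prior)
    else:
        num = (1 - p_e_given_h) * prior
        den = num + (1 - p_e_given_not_h) * (1 - prior)
    return num / den

prior = 0.5
posterior = update(prior, p_e_given_h=0.95, p_e_given_not_h=0.10, observed=False)
print(round(posterior, 3))
```

With these assumed numbers, the silent dog drops the stranger hypothesis from even odds to about five percent, which is the kind of inference people routinely fail to make unaided.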
They describe their thinking in terms of bits of evidence or "dots." Dots might be certain people or money or battalions of troops. The team has broken the deception-detection process into four parts that continually loop back on themselves as you gain more information.
"As you start examining how the evidence relates to the various hypotheses, you go back and look for more dots—that's where tools and humans start to come together," Stech says. "You may have to go back and collect more evidence or review and recharacterize the anomalies you have. We want to support decisions to collect or not collect and analyze or not analyze data we might have otherwise overlooked. We want to gather what's critical to test hypotheses and avoid collecting what's going to be immaterial.
"Our methods should help intelligence analysts decide whether one hypothesis or another is most likely, as well as how to support those hypotheses. Just as critically, they show the analysts what evidence—positive or negative, true or false—is most critical, so analysts can focus their 'spoof unmasking' skills on the most important dots. This dynamic is really the heart of the deception-detection process," he adds.
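The idea of "most critical" dots can be made concrete with the notion of diagnosticity, a standard companion to ACH: evidence scored the same against every hypothesis cannot help choose between them, so collection effort belongs on items whose scores differ. The sketch below is hypothetical, not the MITRE tool's logic, and its evidence items are invented.

```python
# Diagnosticity filter: keep only evidence whose scores differ
# across hypotheses. Scores: +1 consistent, 0 neutral, -1 inconsistent.

matrix = {
    # evidence item: scores against (hypothesis A, hypothesis B)
    "troop trains heading to the front":  (+1, +1),  # fits both -> not diagnostic
    "bridging equipment on the move":     (+1, -1),  # splits the hypotheses
    "reserve units standing down":        ( 0,  0),  # fits neither test
}

def is_diagnostic(scores):
    """Evidence discriminates only if it is scored differently somewhere."""
    return len(set(scores)) > 1

critical = [item for item, scores in matrix.items() if is_diagnostic(scores)]
print(critical)
```

A filter like this supports exactly the collection decisions Stech describes: gather what can actually separate the hypotheses, and skip what is immaterial no matter which hypothesis is true.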
Tooling Up Against Fooling
By codifying the dots and how they are handled, the team has begun building computer-based tools to model decision support for counter-deception, using a MITRE system called Adversarial Planner. This tool takes elements of a potential deception scheme and generates plans and alternate plans, down to the details needed for detecting deception. The tool makes assumptions about the likelihood of specific scenarios occurring and assigns a probability to them.
"Our first research question is this: Can we automate the overall framework and general theory?" Elsaesser says. "Our second question, which will probably be part of follow-up research, is: Can we create interfaces so the tool can be applied to several knowledge domains? We're working on automating the logic of all this with statements such as, 'This event cannot happen at the same time as that event.'"
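The kind of constraint Elsaesser quotes can be encoded as pairwise "cannot co-occur" rules that prune candidate plans. The Adversarial Planner's actual representation isn't described in the article, so this is a minimal sketch under that assumption, with invented event names.

```python
# Prune candidate plans that violate mutual-exclusion constraints.
from itertools import combinations

# Pairs of events that cannot both be true at once (hypothetical).
mutually_exclusive = {
    frozenset({"army marches north", "army marches south"}),
    frozenset({"radio silence", "heavy radio traffic"}),
}

def consistent(plan):
    """A plan (a set of events) is consistent if no pair of its
    events appears in the mutual-exclusion rules."""
    return all(frozenset(pair) not in mutually_exclusive
               for pair in combinations(plan, 2))

plans = [
    {"army marches north", "radio silence"},
    {"army marches north", "army marches south"},
]
print([consistent(p) for p in plans])  # -> [True, False]
```

Ruling out impossible combinations is how a planner can keep the exponential space of deception hypotheses tractable enough for an analyst to review.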
The researchers' goal is to help MITRE sponsors—including those from the military, intelligence community, homeland security, and the IRS—to generate hypotheses and keep track of their theories and evidence. Adversarial Planner is application-independent and can be applied to most domains, from intelligence and military counter-deception work to forensic accounting and fraud detection.
Elsaesser explains, "One of the biggest problems you face is not just uncovering deception, but figuring out what the deceiver is really doing. We want to avoid what's called the '180-degree fallacy'—that is, if they're not doing one thing, they're doing the opposite. Just because an army is not marching south does not mean it's marching north. Just because you can recognize a deception doesn't mean you know what will happen. But it does mean you know what will not happen—the deception, such as the landings at Pas de Calais. Since you've avoided doing what the enemy, terrorist, or con artist wanted, you still come out way ahead."
The Human Factor
"One of the things I think we're adding to the state of the knowledge is very powerful general descriptions of deception and counter-deception, based on fundamental cognitive principles, that seem to apply across domains," Stech adds. "We found this in our research into domains such as forensic accounting and toll-call fraud, and we've also seen the commonalities in military and diplomatic-political deceptions and how they can integrate into this model."
Elsaesser notes, "The key to our tool-building is this: If it's a deception, it's not the adversary's true course of action, so through some logical means we should be able to uncover it. We just hope to be able to uncover it before it becomes obvious and bad things happen."
—by Alison Stern-Dunyak
Page last updated: May 20, 2004