Imagine you’re the intelligence analyst of the future and you think you’ve just solved a vexing international puzzle for your leaders, when suddenly a gentle voice rises from your computer: “Excuse me, Dave, other explanations with more weight are available. Would you like to see them?”
This kind of ghost in the machine is called a software agent. Technologists are almost certain to avoid any similarities in their designs to HAL, the all-powerful computer in “2001: A Space Odyssey.” But software agents are, in fact, a serious idea for mitigating cognitive biases – the brain’s natural tendency to find shortcuts through complex information.
Intelligence managers took on the bias problem after the Sept. 11 terror attacks, and redoubled their efforts after the wildly wrong 2002 national intelligence estimate about Iraq’s weapons of mass destruction. For the most part, agencies addressed bias by training analysts to use new structured analytic techniques – software is now used to chart and visualize evidence, arguments and logic.
It’s hard to train away all cognitive biases, though, because the human mind innately applies mental shortcuts when faced with big data dumps. The mind falls back easily on previous experiences, for example, which can lead to confirmation bias. Cognitive biases should not be confused with stereotypes or prejudices. They’re much more subtle, cognitive experts say.
Scientists such as those at Raytheon think the time is right for science and technology to play a larger role in combating bias – but with limits. If software agents were to be deployed – one researcher says their use in the community is “incipient” at this point – analysts would have to retain the power to take or leave the computer’s advice.
“The analyst’s reaction to a tool like that would depend on how intrusive it is — whether it’s Big Brother,” says former intelligence analyst James Steiner, who retired from the CIA in 2005 and teaches at Rockefeller College, an arm of the State University of New York at Albany.
This article is based on interviews with cognitive researchers, former analysts and industry experts. It shows that the community has options it has yet to exercise. Software agents are just one. Experts applaud the structured analytics trend, but they decry the lack of quantitative evidence about the techniques’ effectiveness in different scenarios. What if information technology could be applied to monitor analytical strategies and outcomes over time? Some of the structured techniques have yet to be institutionalized out of concern that they could overburden time-pressed analysts who find old-fashioned intuition to be a lot faster.
Raytheon is emerging as the most outspoken advocate of applying information technology and experimentation to the cognitive bias problem. The company wants to monitor the decisions of analysts in lab exercises, during training, and eventually in real time while they’re working. The monitoring could help the community assess which techniques work best, and give analysts instant feedback so they can avoid mistakes, rather than simply leaving an audit trail for investigators.
“Awareness alone isn’t enough to overcome (bias) – this is what research has shown. You have to have some way to intervene when the bias is emerging,” says Raytheon researcher and PhD student Don Kretz, who studies cognitive bias in complex problem solving scenarios.
Kretz’s words echo those in a prepared statement from IARPA, the Intelligence Advanced Research Projects Activity: “Research to date has found that simply knowing about the bias is not sufficient to help the individual avoid the bias.”
IARPA wants to see if specialized video games can improve training.
Raytheon wants to do even more. Installation of software agents is one idea.
Rand’s Gregory Treverton, who oversaw the drafting of national intelligence estimates from 1993 to 1995, says software agents might actually empower analysts if they are implemented in a non-heavy-handed way.
“One of the things that people learn best from, the psychologists tell us, is feedback. And the closer it is to the event the better,” he says.
Last June, 27 Raytheon employees, mostly from the engineering ranks, volunteered to participate in a one-day intelligence analysis exercise. The purpose was to show how elaborate games might be used to compare different structured analytic techniques for effectiveness. Raytheon plans to release the paper Nov. 15 at the IEEE Homeland Security Technologies conference in Waltham, Mass.
The company hopes the exercise and a similar one in August will prompt its mission partners – meaning intelligence agencies – to assign working analysts to participate in one or more exercises late this year or early next. Talks are underway, Kretz says.
In the June pilot, the volunteers were given fictional intelligence reports, cell call transcripts, and embassy reports describing three bombings near the Green Zone in Baghdad. The information was drawn from a library of synthetic counterinsurgency, or SYNCOIN, scenarios developed by Penn State University with funds from the Army Research Office. Participants had to figure out who planted the bombs and who the intended targets were.
The game designers made things difficult for the participants by including facts intended to draw out anchoring biases – the tendency of human minds to glom onto evidence supporting an initial hypothesis – and confirmation biases, the tendency to put more weight on evidence that confirms an underlying belief.
The participants were briefed on three techniques for “debiasing” their analysis and given simple software tools. Raytheon calls these techniques analytic multipliers.
Some participants used link analysis software to chart the relationships among people, organizations, objects and events.
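In essence, link analysis treats the evidence as a graph: entities become nodes and observed relationships become edges, so an analyst can ask what connects to what. A minimal sketch of that idea, with entirely invented entity names standing in for the exercise’s fictional reports:

```python
# Minimal link-analysis sketch: record relationships among entities
# (people, organizations, objects, events) in an adjacency map, then
# list everything one hop from a chosen entity. All names are invented
# for illustration; they are not from the SYNCOIN scenarios.
from collections import defaultdict

links = defaultdict(set)

def add_link(a, b, relation):
    """Record an undirected relationship between two entities."""
    links[a].add((b, relation))
    links[b].add((a, relation))

add_link("Courier A", "Cell Leader", "met with")
add_link("Cell Leader", "Bombing #2", "claimed by")
add_link("Courier A", "Safe House", "seen at")

# Everything directly connected to "Cell Leader":
for entity, relation in sorted(links["Cell Leader"]):
    print(f"Cell Leader --[{relation}]--> {entity}")
```

Real link-analysis tools add visualization and multi-hop queries on top of exactly this kind of structure.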
Others used an information extraction and weighting technique to assign numbers from 1 to 10 to specific pieces of evidence based on how critical they judged them – 10 being the most critical. The evidence and numbers were typed into spreadsheets.
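The weighting step amounts to a simple ranked list: score each item, then sort so the most critical evidence surfaces first. A toy sketch, with the evidence items and scores invented for illustration (the exercise used ordinary spreadsheets, not code):

```python
# Sketch of the extraction-and-weighting technique: each piece of
# evidence gets a criticality score from 1 (least) to 10 (most),
# and sorting brings the highest-weighted items to the top.
# Items and scores below are invented for illustration.
evidence = [
    ("Cell call placing Courier A near blast site", 9),
    ("Embassy report of increased checkpoint activity", 4),
    ("Rumor of rivalry between two militias", 2),
]

# Sort by weight, most critical first
for item, weight in sorted(evidence, key=lambda e: e[1], reverse=True):
    print(f"[{weight:>2}] {item}")
```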
Others used software to draw up matrices documenting the competing explanations and the evidence supporting them. That technique is called Analysis of Competing Hypotheses or ACH.
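An ACH matrix can be reduced to a few lines of code: score each piece of evidence as consistent, inconsistent or neutral with each hypothesis, then favor the hypothesis with the fewest inconsistencies – Heuer’s insight being that disconfirming evidence, not confirming evidence, is what separates hypotheses. The hypotheses and scores below are invented for illustration:

```python
# Toy Analysis of Competing Hypotheses (ACH) matrix, after Heuer:
# +1 = evidence consistent with the hypothesis, -1 = inconsistent,
# 0 = neutral. The hypothesis with the FEWEST inconsistencies wins;
# consistent evidence alone proves little, since it may fit several
# hypotheses at once. All entries are invented for illustration.
hypotheses = ["Militia X planted the bombs", "Militia Y planted the bombs"]
# each evidence item maps to its score against each hypothesis, in order
matrix = {
    "Courier A linked to Militia X":        [+1, -1],
    "Militia Y claimed responsibility":     [-1, +1],
    "Bomb design matches Militia X caches": [+1, -1],
}

for i, h in enumerate(hypotheses):
    inconsistencies = sum(1 for scores in matrix.values() if scores[i] == -1)
    print(f"{h}: {inconsistencies} inconsistent item(s)")
```

Here Militia X comes out ahead with one inconsistency to Militia Y’s two, even though a bare claim of responsibility pointed the other way – the kind of result that guards against anchoring on the first loud piece of evidence.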
How did the comparison turn out? The first thing Raytheon wants you to know is that picking a winner was not the main point. The purpose was to develop an experimental protocol to show how the community could, over time, assemble quantitative information to document the effectiveness of different techniques.
For CIA-funded researcher Richards Heuer, it’s about time someone gathered performance data about the structured techniques he advocates: “This is exactly what we need more of – empirical testing and documentation of the effectiveness or the ineffectiveness of various analytic tools and a better understanding of which tools are best used for which type of problem assembled over time,” he said by email.
Raytheon has discussed its research with Heuer, but he is not affiliated with the Raytheon team.
To the degree we should care about the result, it was good news for advocates of ACH, a 30-year-old structured technique developed by Heuer.
“The Analysis of Competing Hypotheses technique did in fact improve judgment the most,” says Kretz, who is one of the paper’s authors and is scheduled to speak at the IEEE conference.
Heuer came up with ACH in the 1980s after retiring from the CIA and continuing his research as a contractor to the agency. His goal was to improve analysis of the Soviet Union:
“When analysts were asked if they thought the Soviets were deliberately trying to deceive us on a specific point, they would almost invariably say ‘No’, because they didn’t see any evidence of it,” Heuer recalls. “What they didn’t recognize was that if the deception was being done well, they shouldn’t expect to see evidence of it.”
ACH helped the CIA sort deception from reality. For about 15 years, that’s the only reason it was taught and used, says Heuer.
Then came the Sept. 11 attacks and the Iraq weapons fiasco. The agencies took a fresh look at ACH and other structured techniques. In 2010, Heuer and Randolph Pherson authored an unclassified book, “Structured Analytic Techniques for Intelligence Analysis,” which explained how the techniques could help analysts in intelligence, law enforcement and even the business world.
Heuer cautions that ACH is not a “cure-all” for all analysis problems even though it shined in the Raytheon exercise.
Some researchers suspect ACH has the most impact for novice analysts, which could explain why it came out on top among engineers pretending to be analysts.
If Raytheon convinces intelligence agencies to play, the June exercise could go down as one of the turning points in the fight against bias. The company insists its advocacy is not about winning the next big contract or selling the agencies more software.
“They’ve got shelves and shelves full of tools that they’ve bought and paid for that they don’t use,” Kretz says. “We’re not trying to develop more stuff like that. We want them to be partners in this.”