“Intelligence is about creating and adjusting stories”

…says Gregory Treverton, in his Prospect article “What should we expect of our spies?”, June 2011

RD comment: How do you assess the performance of intelligence agencies, in the way they collect and make sense of the world around them? How do you explain their failure to predict some of the biggest developments of the last thirty years, including the collapse of the Soviet Union, the failure to find weapons of mass destruction (WMD) in Iraq, and the contagion effects in the more recent Arab Spring?

The American intelligence agencies described by Treverton struggle to make sense of vast masses of information, much of which is incomplete and ambiguous. Storylines emerge and become dominant, which have some degree of fit with the surrounding political context. “Questions not asked or stories not imagined by policy are not likely to be developed by intelligence”. Referring to the end of the Soviet Union, Treverton identifies two possible counter-measures: “What we could have expected of intelligence was not better prediction but earlier and better monitoring of internal shortcomings. We could also have expected competing stories to challenge the prevailing one. Very late, in 1990, an NIE, “The deepening crisis in the USSR”, did just that, laying out four different scenarios, or stories, for the coming year”.

Discussing the WMD story, he remarks: “the most significant part of the WMD story was what intelligence and policy shared: a deeply held mindset that Saddam must have WMD… In the end if most people believe one thing, arguing for another is hard. There is little pressure to rethink the issue and the few dissenters in intelligence are lost in the wilderness. What should have been expected from intelligence in this case was a section of the assessments asking what was the best case that could be made that Iraq did not have WMD.”

Both sets of suggestions seem to have some relevance to the production of evaluations. Should alternative interpretations be more visible? Should evaluation reports contain their own best counter-arguments (as a free-standing section, not simply as straw men to be dutifully propped up then knocked down)?

There are also other echoes in Treverton’s paper of the practice and problems of monitoring and evaluating aid interventions. The pressing demand for immediate information, at the expense of a long-term perspective: “We used to do analysis, now we do reporting”, says one American analyst. Some aid agency staff have reported similar problems. Impact evaluations? Yes, that would be good, but in reality we are busy meeting the demand for information about more immediate aspects of performance.

There are interesting conclusions as well: “At the NIC, I came to think that, for all the technology, strategic analysis was best done in person. I came to think that our real products weren’t those papers, the NIEs. Rather they were the NIOs, the National Intelligence Officers—the experts, not papers. We all think we can absorb information more efficiently by reading, but my advice to my policy colleagues was to give intelligence officers some face time… In 20 minutes, though, the intelligence officers can sharpen the question, and the policy official can calibrate the expertise of the analyst. In that conversation, intelligence analysts can offer advice; they don’t need to be as tightly restricted as they are on paper by the “thou shalt not traffic in policy” edict. Expectations can be calibrated on both sides of the conversation. And the result might even be better policy.”
