Analytic Rigour in Information Analysis – Lessons from the intelligence community?

This post was prompted by a blog post by Irene Guijt about a presentation by Michael Patton at a workshop in Wageningen last week (which I also attended). The quotes below come from a webpage about Zelik, Patterson and Woods’ Rigour Attribute Model, which outlines eight attributes of a rigorous process of information analysis, along with guidance on recognising the extent to which each criterion has been met.

The model is summarised in this Analytical Rigor Poster (PDF)

Quotes from the website

“The proliferation of data accessibility has exacerbated the risk of shallowness in information analysis, making it increasingly difficult to tell when analysis is sufficient for making decisions or changing plans, even as it becomes increasingly easy to find seemingly relevant data. In addressing the risk of shallow analysis, the assessment of rigor emerges as an approach for coping with this fundamental uncertainty, motivating the need to better define the concept of analytical rigor.”

“Across information analysis domains, it is often difficult to recognize when analysis is inadequate for a given context. A better understanding of rigor is an analytic broadening check to be leveraged against this uncertainty. The purpose of this research is to refine the understanding of rigor, exploring the concept within the domain of intelligence analysis. Nine professional intelligence analysts participated in a study of how analytic rigor is judged. The results suggest a revised definition of rigor, reframing it as an emergent multi-attribute measure of sufficiency rather than as a measure of process deviation. Based on this insight, a model for assessing rigor was developed, identifying eight attributes of rigorous analysis. Finally, an alternative model of briefing interactions is proposed that integrates this framing of rigor into an applied context. This research, although specific in focus to intel analysis, shows the potential to generalize across forms of information analysis.”

The references provided include:

Zelik, D. J., Patterson, E. S., & Woods, D. D. (2010). Measuring attributes of rigor in information analysis. In E. S. Patterson & J. E. Miller (Eds.), Macrocognition metrics and scenarios: Design and evaluation for real-world teams. Aldershot, UK: Ashgate. (ISBN: 978-0-7546-7578-5) Currently, the best source for a detailed discussion of our ongoing research on analytical rigor is this forthcoming book chapter, which proposes rigor as a macrocognitive measure of expert performance.

Zelik, D., Patterson, E. S., & Woods, D. D. (2007, June). Understanding rigor in information analysis. Paper presented at the 8th International Conference on Naturalistic Decision Making, Pacific Grove, CA. (PDF) (VIDEO) This paper, presented at the Eighth International Naturalistic Decision Making Conference, provides a more formal overview of our current research.

Modeling Rigor in Information Analysis: A Metric for Rigor Poster (PDF) This poster provides an overview of the rigor model, identifying the aspects of the attributes that contribute to low, moderate, and high rigor analysis processes. It also overviews the rigor metric as applied to the LNG Scenario study.

Reducing the Risk of Shallow Information Analysis Google TechTalk David D. Woods’ discussion of our analytical rigor research at a Google TechTalk provides a dynamic presentation of the material. Google TechTalks are designed to disseminate a wide spectrum of views on topics including Current Affairs, Science, Medicine, Engineering, Business, Humanities, Law, Entertainment, and the Arts. This talk was originally recorded on April 10, 2007.
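To give a concrete feel for how an "emergent multi-attribute measure of sufficiency" might be used, here is a minimal, purely illustrative Python sketch. It is not taken from Zelik, Patterson and Woods: the attribute names are placeholders (the poster linked above defines the actual eight), and the only thing it borrows from the model is the idea of rating eight attributes on a low/moderate/high scale and asking where the analysis may still be too shallow for the decision at hand.

```python
# Rough, illustrative sketch only -- not the authors' method. It rates eight
# rigour attributes as low/moderate/high and flags the low-rated ones.
# Attribute names are placeholders; see the linked poster for the real eight.

from collections import Counter

RATING_ORDER = ("low", "moderate", "high")

def summarise_rigour(ratings: dict) -> dict:
    """Summarise per-attribute ratings into a simple sufficiency snapshot."""
    if len(ratings) != 8:
        raise ValueError("The model defines eight attributes.")
    for attr, level in ratings.items():
        if level not in RATING_ORDER:
            raise ValueError(f"{attr}: rating must be one of {RATING_ORDER}")
    counts = Counter(ratings.values())
    return {
        "counts": {level: counts.get(level, 0) for level in RATING_ORDER},
        # Low-rated attributes flag where the analysis may be too shallow
        # for the decision it is meant to support.
        "flagged_as_shallow": sorted(a for a, r in ratings.items() if r == "low"),
    }

# Hypothetical assessment of one analysis product.
example = {f"attribute_{i}": level
           for i, level in enumerate(
               ["high", "moderate", "moderate", "low",
                "high", "moderate", "low", "moderate"], start=1)}
print(summarise_rigour(example))
```

Used this way, the point is not an overall score but a prompt for the briefing conversation: which attributes of the analysis are still shallow relative to what the decision requires?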

Can we obtain the required rigour without randomisation? Oxfam GB’s non-experimental Global Performance Framework

Karl Hughes, Claire Hutchings, August 2011. 3ie Working Paper 13. Available as pdf.

[found courtesy of @3ieNews]

Abstract

“Non-governmental organisations (NGOs) operating in the international development sector need credible, reliable feedback on whether their interventions are making a meaningful difference but they struggle with how they can practically access it. Impact evaluation is research and, like all credible research, it takes time, resources, and expertise to do well, and – despite being under increasing pressure – most NGOs are not set up to rigorously evaluate the bulk of their work. Moreover, many in the sector continue to believe that capturing and tracking data on impact/outcome indicators from only the intervention group is sufficient to understand and demonstrate impact. A number of NGOs have even turned to global outcome indicator tracking as a way of responding to the effectiveness challenge. Unfortunately, this strategy is doomed from the start, given that there are typically a myriad of factors that affect outcome level change. Oxfam GB, however, is pursuing an alternative way of operationalising global indicators. Closing and sufficiently mature projects are being randomly selected each year among six indicator categories and then evaluated, including the extent each has promoted change in relation to a particular global outcome indicator. The approach taken differs depending on the nature of the project. Community-based interventions, for instance, are being evaluated by comparing data collected from both intervention and comparison populations, coupled with the application of statistical methods to control for observable differences between them. A qualitative causal inference method known as process tracing, on the other hand, is being used to assess the effectiveness of the organisation’s advocacy and popular mobilisation interventions. However, recognising that such an approach may not be feasible for all organisations, in addition to Oxfam GB’s desire to pursue complementary strategies, this paper also sets out several other realistic options available to NGOs to step up their game in understanding and demonstrating their impact. These include: 1) partnering with research institutions to rigorously evaluate “strategic” interventions; 2) pursuing more evidence informed programming; 3) using what evaluation resources they do have more effectively; and 4) making modest investments in additional impact evaluation capacity.”
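To make the quasi-experimental strand of this approach concrete, here is a minimal sketch of one standard way to compare intervention and comparison populations while controlling for observable differences between them: regression adjustment of an outcome indicator on treatment status and covariates. This illustrates the general technique only; the data are invented and nothing here should be read as Oxfam GB's actual analysis (the paper mentions statistical controls generically, and its advocacy evaluations use process tracing, which does not reduce to a calculation like this).

```python
# Illustrative only: regression adjustment of an outcome indicator on treatment
# status plus observable covariates -- one common way to control for observable
# differences between intervention and comparison groups. Invented data.

import numpy as np

rng = np.random.default_rng(0)
n = 500

# Observable household characteristics (e.g. baseline assets, household size).
covariates = rng.normal(size=(n, 2))
# Treatment assignment correlated with observables, as in a non-randomised design.
treated = (covariates[:, 0] + rng.normal(size=n) > 0).astype(float)
# Outcome indicator with a true intervention effect of 1.5 built in.
outcome = 1.5 * treated + covariates @ np.array([2.0, -1.0]) + rng.normal(size=n)

# Ordinary least squares: outcome ~ intercept + treated + covariates.
X = np.column_stack([np.ones(n), treated, covariates])
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"Estimated intervention effect (adjusted for observables): {coefs[1]:.2f}")
```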

Evaluation Revisited – Improving the Quality of Evaluative Practice by Embracing Complexity

Utrecht Conference Report. Irene Guijt, Jan Brouwers, Cecile Kusters, Ester Prins and Bayaz Zeynalova. March 2011. Available as pdf

This report summarises the outline and outputs of the conference ‘Evaluation Revisited: Improving the Quality of Evaluative Practice by Embracing Complexity’, which took place on May 20-21, 2010. It also adds insights and observations related to the conference themes that emerged in later presentations about the conference at specific events.

Contents (109 pages):

1 What is Contested and What is at Stake
1.1 Trends at Loggerheads
1.2 What is at Stake?
1.3 About the May Conference
1.4 About the Report
2 Four Concepts Central to the Conference
2.1 Rigour
2.2 Values
2.3 Standards
2.4 Complexity
3 Three Questions and Three Strategies for Change
3.1 What does ‘evaluative practice that embraces complexity’ mean in practice?
3.2 Trade-offs and their Consequences
3.3 (Re)legitimise Choice for Complexity
4 The Conference Process in a Nutshell
