A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences

Gary Goertz & James Mahoney, 2012
Princeton University Press. Available on Amazon

Review of the book by Dan Hirschman

Excerpts from his review:

“Goertz, a political scientist, and Mahoney, a sociologist, attempt to make sense of the different cultures of research in these two camps without attempting to apply the criteria of one to the other. In other words, the goal is to illuminate difference and similarity rather than judge either approach (or, really, affiliated collection of approaches) as deficient by a universal standard.

G&M are interested in quantitative and qualitative approaches to causal explanation.

Onto the meat of the argument. G&M argue that the two cultures of quantitative and (causal) qualitative research differ in how they understand causality, how they use mathematics, how they privilege within-case vs. between-case variation, how they generate counterfactuals, and more. G&M argue, perhaps counter to our expectations, that both cultures have answers to each of these questions, and that the answers are reasonably coherent across cultures, but create tensions when researchers attempt to evaluate each others’ research: we mean different things, we emphasize different sorts of variation, and so on. Each of these differences is captured in a succinct chapter that lays out in incredible clarity the basic choices made by each culture, and how these choices aggregate up to very different models of research.

Perhaps the most counterintuitive, but arguably most rhetorically important, is the assertion that both quant and qual research are tightly linked to mathematics. For quant research, the connection is obvious: quantitative research relies heavily on probability and statistics. Causal explanation consists of statistically identifying the average effect of a treatment. For qual research, the claim is much more controversial. Rather than relying on statistics, G&M assert that qualitative research relies on logic and set theory, even if this reliance is often implicit rather than formal. G&M argue that at the core of explanation in the qualitative culture are the set theoretic/logical criteria of necessary and sufficient causes. Combinations of necessary and sufficient explanations constitute causal explanations. This search for non-trivial necessary and sufficient conditions for the appearance of an outcome shapes the choices made in the qualitative culture, just as the search for significant statistical variation shapes quantitative research. G&M include a brief review of basic logic, and a quick overview of the fuzzy-set analysis championed by Charles Ragin. I had little prior experience with fuzzy sets (although plenty with formal logic), and I found this chapter extremely compelling and provocative. Qualitative social science works much more often with the notion of partial membership – some countries are not quite democracies, while others are completely democracies, and others are completely not democracies. This fuzzy-set approach highlights the non-linearities inherent in partial membership, as contrasted with quantitative approaches that would tend to treat “degree of democracy” as a smooth variable.”
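RD note: the fuzzy-set idea mentioned above can be sketched in a few lines of code. The example below is purely illustrative (the cases and membership scores are invented, not from the book): it computes Ragin-style "consistency" for the claim that X is a necessary condition for Y, which in fuzzy-set terms means Y-membership should not exceed X-membership.

```python
# Illustrative sketch of fuzzy-set necessity testing (invented data).
# Each case has fuzzy membership scores in two sets:
#   X = "has competitive elections", Y = "is a democracy"
cases = {
    "A": (1.0, 0.9),
    "B": (0.8, 0.6),
    "C": (0.4, 0.3),
    "D": (0.2, 0.0),
    "E": (0.7, 0.8),  # here Y > X: a partial counterexample to necessity
}

def necessity_consistency(cases):
    """Ragin's consistency for 'X is necessary for Y': sum(min(x, y)) / sum(y).

    A value of 1.0 means Y-membership never exceeds X-membership in any case;
    values near 1.0 indicate the necessity claim holds to a high degree."""
    numerator = sum(min(x, y) for x, y in cases.values())
    denominator = sum(y for _, y in cases.values())
    return numerator / denominator

print(round(necessity_consistency(cases), 3))  # → 0.962
```

Note how case E lowers consistency below 1.0: with crisp (0/1) sets a single counterexample would refute necessity outright, whereas fuzzy sets let the analyst say how nearly necessary the condition is.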

Earlier paper by same authors available as pdf: A Tale of Two Cultures: Contrasting Quantitative and Qualitative Research
by James Mahoney, Gary Goertz. Political Analysis (2006) 14:227–249 doi:10.1093/pan/mpj017


See also The Logic of Process Tracing Tests in the Social Sciences by James Mahoney, Sociological Methods & Research, XX(X), 1-28 Published online 2 March 2012

RD comment: This book is recommended reading!

PS 15 February 2013: See Howard White’s new blog posting “Using the causal chain to make sense of the numbers” where he provides examples of the usefulness of simple set-theoretic analyses of the kind described by Mahoney and Goertz (e.g. in an analysis of arguments about why Gore lost to Bush in Florida)


Measuring Empowerment? Ask Them

Quantifying qualitative outcomes from people’s own analysis. Insights for results-based management from the experience of a social movement in Bangladesh. Dee Jupp and Sohel Ibn Ali, with a contribution from Carlos Barahona. 2010: Sida Studies in Evaluation. Download pdf


Participation has been widely taken up as an essential element of development, but participation for what purpose? Many feel that its acceptance, which has extended to even the most conventional of institutions such as the international development banks, has resulted in it losing its teeth in terms of the original ideology of being able to empower those living in poverty and to challenge power relations.

The more recent emergence of the rights-based approach discourse has the potential to restore the ‘bite’ to participation and to re-politicise development. Enshrined in universal declarations and conventions, it offers a palatable route to accommodating radicalism and creating conditions for emancipatory and transformational change, particularly for people living in poverty. But an internet search on how to measure the impact of these approaches yields a disappointing harvest of experience. There is a proliferation of debate on the origins and processes, the motivations and pitfalls of rights-based programming but little on how to know when or if it works. The discourse is messy and confusing and leads many to hold up their hands in despair and declare that outcomes are intangible, contextual, individual, behavioural, relational and fundamentally un-quantifiable!

As a consequence, results-based management pundits are resorting to substantive measurement of products, services and goods which demonstrate outputs and rely on perception studies to measure outcomes.

However, there is another way. Quantitative analyses of qualitative assessments of outcomes and impacts can be undertaken with relative ease and at low cost. It is possible to measure what many regard as unmeasurable.

This publication suggests that steps in the process of attainment of rights and the process of empowerment are easy to identify and measure for those active in the struggle to achieve them. It is our etic perspectives that make the whole thing difficult. When we apply normative frames of reference, we inevitably impose our values and our notions of democracy and citizen engagement rather than embracing people’s own context-based experience of empowerment.

This paper presents the experience of one social movement in Bangladesh, which managed to find a way to measure empowerment by letting the members themselves explain what benefits they acquired from the Movement and by developing a means to measure change over time. These measures, which are primarily of use to the members, have then been subjected to numerical analysis outside of the village environment to provide convincing quantitative data, which satisfies the demands of results-based management.
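The general mechanics of such an approach can be sketched very simply. The example below is hypothetical (the benefit statements, scale and scoring rule are invented for illustration, not taken from the study's actual instrument): members assess their own statements of benefit on a short ordinal scale, and the scores are then aggregated numerically, allowing change over time to be expressed as a figure.

```python
# Hypothetical sketch: members self-assess their own benefit statements
# on a simple ordinal scale; scores are aggregated outside the village.
SCALE = {"achieved": 2, "partly": 1, "not yet": 0}

def group_score(assessments):
    """One group's self-assessment as a percentage of the maximum score."""
    total = sum(SCALE[answer] for answer in assessments.values())
    return 100.0 * total / (2 * len(assessments))

# The same group assessed at two points in time (invented data)
group_2008 = {"can speak at village meetings": "partly",
              "children attend school": "achieved",
              "access to khas land": "not yet"}
group_2010 = {"can speak at village meetings": "achieved",
              "children attend school": "achieved",
              "access to khas land": "partly"}

print(group_score(group_2008))  # → 50.0
print(round(group_score(group_2010), 1))  # → 83.3
```

The key design point, echoing the paper, is that the statements themselves come from the members, so the numbers summarise the members' own analysis rather than an outsider's normative frame.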

The paper is aimed primarily at those who are excited by the possibilities of rights-based approaches but who are concerned about proving that their investment results in measurable and attributable change. The experience described here should build confidence that transparency, rigour and reliability can be assured in community led approaches to monitoring and evaluation without distorting the original purpose, which is a system of reflection for the community members themselves. Hopefully, the reader will feel empowered to challenge the sceptics.

Dee Jupp and Sohel Ibn Ali

Are Metrics Blinding Our Perception?

(from New York Times, found by Aldo Benini)


CAMBRIDGE, Massachusetts — The Trixie Telemetry company believes in hard, quantifiable truths. It believes that there is a right time and wrong time to breast-feed a baby. It believes that certain hours and rooms are better for a child’s naps than others and that data can establish this, too. It believes that parents should track how long their infants have gone without soiling a diaper and devote themselves to beating this “high score.”

To these ends, the company sells what is a coveted service in this age: a dashboard. It invites you to enter data on your baby’s life, and it produces color-coded charts, Sleep Probability Distributions, digestive analysis and such, to help parents make data-based decisions.

Don’t laugh, because Trixie Telemetry is made from the essence of our age… >read the rest of the article on the NYT website here<

Quantification of qualitative data in the water sector: The challenges

by Christine Sijbesma and Leonie Postma

Published in Water International, Volume 33, Issue 2, June 2008 pp. 150-161 (Full text >here<)


Participatory methods are increasingly used in water-related development and management. Most information gathered with such methods is qualitative. The general view is that such information cannot be aggregated and is therefore less interesting for managers. This paper shows that the opposite can be the case. It describes a participatory methodology that quantifies qualitative information for management at all levels. The methodology was developed to assess the sustainability of community-managed improved water supplies, sanitation and hygiene. It allows correlating outcomes with processes of participation, gender and social equity, and so assessing where changes are needed. The paper also describes how elements of positivistic research such as sampling were included. Application in over 15 countries taught that such quantified qualitative methods are an important supplement to or an alternative for social surveys. While the new approach makes statistical analysis possible, it also increases the risk that participatory methods are used extractively when the demand for data on outcomes dominates over quality of process. As a result, the collection of qualitative information and the use of the data for community action and adaptive management are lost. However, when properly applied, quantification offers interesting new opportunities. It makes participatory methods more attractive to large programmes and gives practitioners and donors a new chance to adopt standards of rigor and ethics and so combine quantification with quality protection and developmental use.
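As a hedged illustration of how qualitative participatory assessments can be made aggregable (the paper's own scales and indicators differ): communities place themselves on a short ordinal "ladder" per indicator, and those ordinal scores can then be summarised across communities for managers. The indicators and scores below are invented.

```python
# Hypothetical sketch: ordinal "ladder" scores (0 = worst rung, 4 = best)
# reported by three communities on a few sustainability indicators.
from statistics import median

scores = {
    "water point functioning": [4, 3, 2],
    "committee meets regularly": [3, 1, 2],
    "women in committee roles": [2, 2, 4],
}

# Medians are a defensible summary for ordinal data: managers can compare
# indicators across the programme without pretending the rungs are an
# interval scale.
summary = {indicator: median(vals) for indicator, vals in scores.items()}
print(summary)
```

This also illustrates the risk the abstract warns about: once scores like these satisfy managers' demand for numbers, the community-level discussion that produced them can be squeezed out unless the process is protected.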

3ie news: Working paper series launched

The first two 3ie working papers are now available:

Working Paper No. 1, Reflections on some current debates in impact evaluation, by Howard White, reviews some of the criticisms commonly leveled at quantitative approaches to impact evaluation, arguing that many are based on misconceptions; and

Working Paper No. 2, Better Evidence for a Better World, edited by Mark Lipsey and Eamonn Noonan (produced jointly with The Campbell Collaboration) reviews the need for, and uses of, evidence in various fields of social policy.

Quantitative and Qualitative Methods in Impact Evaluation and Measuring Results

Sabine Garbarino and Jeremy Holland, March 2009

Issues paper | Workshop report

There has been a renewed interest in impact evaluation and measuring results in recent years amongst development agencies and donors. This paper reviews the case for promoting and formalising qualitative and combined methods for impact evaluation and measuring results, as part of a broader strategy amongst donors and country partners for tackling the evaluation gap. The accompanying workshop report provides a summary of the January 2009 workshop “Make an Impact: Tackling the ‘I’ and the ‘D’ of Making It Happen”, which aimed to familiarise DFID staff with the use of qualitative methods in impact evaluation and measuring results.

The case for qualitative and combined methods is strong. Qualitative methods have an equal footing in evaluation of development impacts and can generate sophisticated, robust and timely data and analysis. Combining qualitative research with quantitative instruments that have greater breadth of coverage and generalisability can result in better evaluations that make the most of their respective comparative advantages.

Is Empowerment Efficient?: A Data Envelopment Analysis of 260 Local Associations in Bangladesh

“This report presents one of the first formal analyses, outside the microfinance area, of the efficiency (as different from the effectiveness) of a development NGO program. The author [Aldo Benini], who invites comments and suggestions, offers this summary:

“Empowerment, a concept with a successful twentieth-century cultural career, has been recognized for its relevance and, increasingly, effectiveness in liberating the poor, both at the individual and local community level. Efforts to create valid measurement tools have advanced, with a focus on causality, thus on effectiveness of empowerment programs. The efficiency of such programs, in other words, considerations of optimal resource use, has not been investigated widely, with the exception of microfinance projects. Such programs are sheltered from efficiency pressures by the subsidies of aid chains and by the need to work out, in precarious social environments, organizational arrangements that produce credible empowerment effects in the first place.
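For readers unfamiliar with Data Envelopment Analysis, a heavily simplified illustration may help (the report's actual model, with 260 associations and multiple inputs and outputs, is far richer and solved by linear programming). With a single input and a single output per unit, DEA efficiency reduces to each unit's output/input ratio scaled by the best ratio observed. The associations and figures below are invented.

```python
# Hedged, degenerate-case sketch of the DEA idea (invented data):
# one input and one output per association, so efficiency is each
# unit's output/input ratio divided by the best ratio on the frontier.
associations = {
    # name: (input, output), e.g. (annual cost, members mobilised)
    "A": (100, 50),
    "B": (80, 48),
    "C": (120, 30),
}

def dea_efficiency(units):
    """Score each unit relative to the best output/input ratio (1.0 = efficient)."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

print(dea_efficiency(associations))  # B defines the frontier with ratio 0.6
```

The point of the efficiency-versus-effectiveness distinction in the report is visible even here: association A produces the most output in absolute terms, yet B is the efficient one, because it does more with less.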
