Case-Selection [for case studies]: A Diversity of Methods and Criteria

Gerring, J., Cojocaru, L., 2015. Case-Selection: A Diversity of Methods and Criteria. January 2015. Available as pdf.

Excerpt: “Case-selection plays a pivotal role in case study research. This is widely acknowledged, and is implicit in the practice of describing case studies by their method of selection – typical, deviant, crucial, and so forth. It is also evident in the centrality of case-selection in methodological work on the case study, as witnessed by this symposium. By contrast, in large-N cross-case research one would never describe a study solely by its method of sampling. Likewise, sampling occupies a specialized methodological niche within the literature and is not front-and-center in current methodological debates. The reasons for this contrast are revealing and provide a fitting entrée to our subject.

First, there is relatively little variation in methods of sample construction for cross-case research. Most samples are randomly sampled from a known population or are convenience samples, employing all the data on the subject that is available. By contrast, there are myriad approaches to case-selection in case study research, and they are quite disparate, offering many opportunities for researcher bias in the selection of cases (“cherry-picking”).

Second, there is little methodological debate about the proper way to construct a sample in cross-case research. Random sampling is the gold standard and departures from this standard are recognized as inferior. By contrast, in case study research there is no consensus about how best to choose a case, or a small set of cases, for intensive study.

Third, the construction of a sample and the analysis of that sample are clearly delineated, sequential tasks in cross-case research. By contrast, in case study research they blend into one another. Choosing a case often implies a method of analysis, and the method of analysis may drive the selection of cases.

Fourth, because cross-case research encompasses a large sample – drawn randomly or incorporating as much evidence as is available – its findings are less likely to be driven by the composition of the sample. By contrast, in case study research the choice of a case will very likely determine the substantive findings of the case study.

Fifth, because cross-case research encompasses a large sample claims to external validity are fairly easy to evaluate, even if the sample is not drawn randomly from a well-defined population. By contrast, in case study research it is often difficult to say what a chosen case is a case of – referred to as a problem of “casing.”

Finally, taking its cue from experimental research, methodological discussion of cross-case research tends to focus on issues of internal validity, rendering the problem of case-selection less relevant. Researchers want to know whether a study is true for the studied sample. By contrast, methodological discussion of case study research tends to focus on issues of external validity. This could be a product of the difficulty of assessing case study evidence, which tends to demand a great deal of highly specialized subject expertise and usually does not draw on formal methods of analysis that would be easy for an outsider to assess. In any case, the effect is to further accentuate the role of case-selection. Rather than asking whether the case is correctly analyzed readers want to know whether the results are generalizable, and this leads back to the question of case-selection.”

Other recent papers on case selection methods:

Herron, M.C., Quinn, K.M., 2014. A Careful Look at Modern Case Selection Methods. Sociological Methods & Research.
Nielsen, R.A., 2014. Case Selection via Matching. http://www.mit.edu/~rnielsen/Case%20Selection%20via%20Matching.pdf
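
For readers who want a feel for what matching-based case selection involves, here is a minimal sketch of the general idea (not Nielsen's actual procedure or data): among cases that differ on the variable of interest, pick the pair that is most similar on background covariates. The file name, column names and covariates below are all illustrative assumptions.

```python
# Sketch of matching-based case selection: find the treated/control
# pair of cases most similar on background covariates, as candidates
# for a "most similar cases" comparison. All names are illustrative.
import numpy as np
import pandas as pd
from scipy.spatial.distance import mahalanobis

df = pd.read_csv("cases.csv")        # hypothetical case-level dataset
covariates = ["gdp", "population"]   # hypothetical background covariates

treated = df[df["treatment"] == 1]
control = df[df["treatment"] == 0]

# Inverse covariance matrix, needed for the Mahalanobis distance.
VI = np.linalg.inv(np.cov(df[covariates].to_numpy(), rowvar=False))

# Exhaustive search over treated/control pairs for the smallest distance.
best = min(
    ((i, j, mahalanobis(treated.loc[i, covariates].to_numpy(),
                        control.loc[j, covariates].to_numpy(), VI))
     for i in treated.index for j in control.index),
    key=lambda pair: pair[2],
)
print("Most similar treated/control pair and distance:", best)
```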

Multiple Pathways to Policy Impact: Testing an Uptake Theory with QCA

by Barbara Befani, IDS Centre for Development Impact, Practice Paper No. 05, October 2013. Available as pdf.

Abstract: Policy impact is a complex process influenced by multiple factors. An intermediate step in this process is policy uptake, or the adoption of measures by policymakers that reflect research findings and recommendations. The path to policy uptake often involves activism, lobbying and advocacy work by civil society organisations, so an earlier intermediate step could be termed ‘advocacy uptake’: the use of research findings and recommendations by Civil Society Organisations (CSOs) in their efforts to influence government policy. This CDI Practice Paper by Barbara Befani proposes a ‘broad-brush’ theory of policy uptake (more precisely of ‘advocacy uptake’) and then tests it using two methods: (1) a type of statistical analysis and (2) a variant of Qualitative Comparative Analysis (QCA). The pros and cons of both families of methods are discussed in this paper, which shows that QCA offers the power of generalisation whilst also capturing some of the complexity of middle-range explanation. A limited number of pathways to uptake are identified, which are at the same time moderately sophisticated (considering combinations of causal factors rather than additions) and cover a medium number of cases (40), allowing a moderate degree of generalisation. See more at: http://www.ids.ac.uk/publication/multiple-pathways-to-policy-impact-testing-an-uptake-theory-with-qca

Rick Davies comment: What I like about this paper is the way it shows, quite simply, how measurements of the contribution of different possible causal conditions in terms of averages, and correlations between these, can be uninformative and even misleading. In contrast, a QCA analysis of the different configurations of causal conditions can be much more enlightening and easier to relate to what are often complex realities on the ground. The toy example below illustrates the point.
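
As a toy illustration (invented data only, nothing to do with Befani's dataset): when an outcome depends on a combination of two binary conditions, each condition taken on its own can show zero correlation with the outcome, while a truth table over configurations explains every case.

```python
# Toy illustration: correlations can hide configurational causation.
# Y occurs only when conditions A and B are both present or both
# absent (purely invented data, not Befani's dataset).
import pandas as pd

df = pd.DataFrame({"A": [1, 1, 0, 0] * 3, "B": [1, 0, 1, 0] * 3})
df["Y"] = (df["A"] == df["B"]).astype(int)

# Taken one at a time, neither condition correlates with the outcome.
print(df[["A", "B"]].corrwith(df["Y"]))  # both correlations are 0.0

# A truth table over configurations, by contrast, is fully determinate:
# each (A, B) combination has an outcome mean of exactly 0 or 1.
print(df.groupby(["A", "B"])["Y"].agg(["mean", "size"]))
```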

I have taken the liberty of re-analysing the fictional data set provided in the annex, using Decision Tree software (within RapidMiner). This is a means of triangulating the results of QCA analyses. It uses the same kind of data set and produces results that are comparable in structure, but the method of analysis is different. Shown below is a Decision Tree representing seven configurations of conditions that can be found in Befani’s data set of 40 cases. It makes use of four of the five conditions described in the paper. These are shown as nodes in the tree diagram.

[Figure: Decision Tree derived from Befani’s (2013) data set of 40 cases]

The 0 and 1 values on the various branches indicate whether the condition immediately above is present or not. The first configuration on the left says that if there is no ACCESS then research UPTAKE (12 cases at the red leaf) does not take place. This is a statement of a sufficient cause. The branch on the right represents a configuration of three conditions, which says that where ACCESS to research is present, and recommendations are consistent with measures previously (PREV) recommended by the organisation, and where the research findings are disseminated within the organisation by a local ‘champion’ (CHAMP), then research UPTAKE (8 cases at the blue leaf) does take place.

Overall the findings shown in the Decision Tree model are consistent with the QCA analyses in terms of the number of configurations (seven) and the configurations that are associated with the largest number of cases (i.e. their coverage). However, there were small differences in the descriptions of two sets of cases where there was no uptake (red leaves). In the third branch (configuration) from the left above, the QCA analysis indicated that it was the presence of INTERNAL CONFLICT (different approaches to the same policy problem within the organisation) that played a role, rather than the presence of a (perhaps ineffectual) CHAMPION. In the third branch (configuration) from the right, the QCA analysis proposed a fourth necessary condition (QUALITY), in addition to the three shown in the Decision Tree. Here the Decision Tree seems the more parsimonious solution. However, in both sets of cases where differences in findings have occurred, it would make most sense to then proceed with within-case investigations of the causal processes at work.

PS: Here is the dataset, in case anyone wants to play with it.
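
For anyone who does take up that invitation, the Decision Tree re-analysis is straightforward to reproduce outside RapidMiner as well. Below is a minimal sketch in Python with scikit-learn; the file name and column names (ACCESS, PREV, CHAMP, CONFLICT, QUALITY, UPTAKE) are my assumptions based on the conditions described above, so they will need adjusting to the actual file.

```python
# Sketch: triangulating a crisp-set QCA analysis with a decision tree,
# along the lines of the RapidMiner re-analysis described above.
# File and column names are assumptions, not Befani's actual ones.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("befani_2013_cases.csv")  # hypothetical file name
conditions = ["ACCESS", "PREV", "CHAMP", "CONFLICT", "QUALITY"]

# An unpruned tree on binary (0/1) conditions partitions the cases
# into configurations, one per leaf.
tree = DecisionTreeClassifier(random_state=0)
tree.fit(df[conditions], df["UPTAKE"])

# Each root-to-leaf path is a configuration comparable to a QCA
# solution term; the number of cases at a leaf is a rough analogue
# of that configuration's coverage.
print(export_text(tree, feature_names=conditions, show_weights=True))
```

Note that the tree is built by greedy impurity-based splitting rather than by QCA-style Boolean minimisation, which is one reason the two methods can disagree at the margins, as in the two red-leaf branches discussed above.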

Real Time Monitoring for the Most Vulnerable

Greeley, M., Lucas, H. and Chai, J. (eds), IDS Bulletin 44.2. Publisher: IDS.

Purchase a print copy here.

View abstracts online and subscribe to the IDS Bulletin.

“Growth in the use of real time digital information for monitoring has been rapid in developing countries across all the social sectors, and in the health sector has been remarkable. Commonly these Real Time Monitoring (RTM) initiatives involve partnerships between the state, civil society, donors and the private sector. There are differences between partners in understanding of objectives, and divergence occurs due to adoption of specific technology-driven approaches and because profit-making is sometimes part of the equation.

With the swarming, especially of pilot mHealth initiatives, in many countries there is risk of chaotic disconnects, of confrontation between rights and profits, and of overall failure to encourage appropriate alliances to build sustainable and effective national RTM systems. What is needed is a country-led process for strengthening the quality and equity sensitivity of real-time monitoring initiatives. We propose the development of an effective learning and action agenda centred on the adoption of common standards.

IDS, commissioned and guided by UNICEF Division of Policy and Strategy, has carried out a multi-country assessment of initiatives that collect high frequency and/or time-sensitive data on risk, vulnerability and access to services among vulnerable children and populations and on the stability and security of livelihoods affected by shocks. The study, entitled Real Time Monitoring for the Most Vulnerable (RTMMV), began with a desk review of existing RTM initiatives and was followed up with seven country studies (Bangladesh, Brazil, Romania, Senegal, Uganda, Vietnam and Yemen) that further explored and assessed promising initiatives through field-based review and interactive stakeholder workshops. This IDS Bulletin brings together key findings from this research.”

See the full list of papers on this topic at the IDS Bulletin: http://www.ids.ac.uk/publication/real-time-monitoring-for-the-most-vulnerable

Special Issue on Systematic Reviews – J. of Development Effectiveness

Volume 4, Issue 3, 2012

  • Why do we care about evidence synthesis? An introduction to the special issue on systematic reviews
  • How to do a good systematic review of effects in international development: a tool kit
    • Hugh Waddington, Howard White, Birte Snilstveit, Jorge Garcia Hombrados, Martina Vojtkova, Philip Davies, Ami Bhavsar, John Eyers, Tracey Perez Koehlmoos, Mark Petticrew, Jeffrey C. Valentine & Peter Tugwell, pages 359–387
  • Systematic reviews: from ‘bare bones’ reviews to policy relevance
  • Narrative approaches to systematic review and synthesis of evidence for international development policy and practice
  • Purity or pragmatism? Reflecting on the use of systematic review methodology in development
  • The benefits and challenges of using systematic reviews in international development research
    • Richard Mallett, Jessica Hagen-Zanker, Rachel Slater & Maren Duvendack, pages 445–455
  • Assessing ‘what works’ in international development: meta-analysis for sophisticated dummies
    • Maren Duvendack, Jorge Garcia Hombrados, Richard Palmer-Jones & Hugh Waddington, pages 456–471
  • The impact of daycare programmes on child health, nutrition and development in developing countries: a systematic review

A move to more systematic and transparent approaches in qualitative evidence synthesis

An update on a review of published papers.
By Karin Hannes and Kirsten Macaitis. Qualitative Research 2012, 12: 402; originally published online 11 May 2012.

Abstract

In 2007, the journal Qualitative Research published a review on qualitative evidence syntheses conducted between 1988 and 2004. It reported on the lack of explicit detail regarding methods for searching, appraisal and synthesis, and a lack of emerging consensus on these issues. We present an update of this review for the period 2005–8. Not only has the amount of published qualitative evidence syntheses doubled, but authors have also become more transparent about their searching and critical appraisal procedures. Nevertheless, for the synthesis component of the qualitative reviews, a black box remains between what people claim to use as a synthesis approach and what is actually done in practice. A detailed evaluation of how well authors master their chosen approach could provide important information for developers of particular methods, who seem to succeed in playing the game according to the rules. Clear methodological instructions need to be developed to assist others in applying these synthesis methods.

New Directions for Evaluation: Promoting Valuation in the Public Interest: Informing Policies for Judging Value in Evaluation

Spring 2012, Volume 2012, Issue 133, Pages 1–129. Buy here.

Editor’s Notes – George Julnes

  1. Editor’s notes (pages 1–2)

Research Articles

  1. Managing valuation (pages 3–15), George Julnes
  2. The logic of valuing (pages 17–28), Michael Scriven
  3. The evaluator’s role in valuing: Who and with whom (pages 29–41), Marvin C. Alkin, Anne T. Vo and Christina A. Christie
  4. Step arounds for common pitfalls when valuing resources used versus resources produced (pages 43–52), Brian T. Yates
  5. When one must go: The Canadian experience with strategic review and judging program value (pages 65–75), François Dumaine
  6. Valuing, evaluation methods, and the politicization of the evaluation process (pages 77–83), Eleanor Chelimsky
  7. Valuation and the American Evaluation Association: Helping 100 flowers bloom, or at least be understood? (pages 85–90), Michael Morris

“Six Years of Lessons Learned in Monitoring and Evaluating Online Discussion Forums”

by Megan Avila, Kavitha Nallathambi, Catherine Richey, Lisa Mwaikambo – in Knowledge Management & E-Learning: An International Journal (KM&EL), Vol 3, No 4 (2011)

… which looks at how to evaluate virtual discussion forums held on the IBP (Implementing Best Practices in Reproductive Health) Knowledge Gateway – a platform for global health practitioners to exchange evidence-based information and knowledge to inform practice. Available as pdf. Found courtesy of Yaso Kunaratnam, IDS.

Abstract: “This paper presents the plan for evaluating virtual discussion forums held on the Implementing Best Practices in Reproductive Health (IBP) Knowledge Gateway, and its evolution over six years. Since 2005, the World Health Organization Department of Reproductive Health and Research (WHO/RHR), the Knowledge for Health (K4Health) Project based at Johns Hopkins Bloomberg School of Public Health’s Center for Communication Programs (JHU/CCP), and partners of the IBP Initiative have supported more than 50 virtual discussion forums on the IBP Knowledge Gateway. These discussions have provided global health practitioners with a platform to exchange evidence-based information and knowledge with colleagues working around the world. In this paper, the authors discuss challenges related to evaluating virtual discussions and present their evaluation plan for virtual discussions. The evaluation plan included the following three stages: (I) determining value of the discussion forums, (II) in-depth exploration of the data, and (III) reflection and next steps, and was guided by the “Conceptual Framework for Monitoring and Evaluating Health Information Products and Services”, which was published as part of the Guide to Monitoring and Evaluation of Health Information Products and Services. An analysis of data from 26 forums is presented and discussed in light of this framework. The paper also includes next steps for improving the evaluation of future virtual discussions.”


What shapes research impact on policy?

…Understanding research uptake in sexual and reproductive health policy processes in resource poor contexts

Andy Sumner, Jo Crichton, Sally Theobald, Eliya Zulu and Justin Parkhurst. Health Research Policy and Systems 2011, 9(Suppl 1):S3. Published: 16 June 2011.

Abstract “Assessing the impact that research evidence has on policy is complex. It involves consideration of conceptual issues of what determines research impact and policy change. There are also a range of methodological issues relating to the question of attribution and the counter-factual. The dynamics of SRH, HIV and AIDS, like many policy arenas, are partly generic and partly issue- and context-specific. Against this background, this article reviews some of the main conceptualisations of research impact on policy, including generic determinants of research impact identified across a range of settings, as well as the specificities of SRH in particular. We find that there is scope for greater cross-fertilisation of concepts, models and experiences between public health researchers and political scientists working in international development and research impact evaluation. We identify aspects of the policy landscape and drivers of policy change commonly occurring across multiple sectors and studies to create a framework that researchers can use to examine the influences on research uptake in specific settings, in order to guide attempts to ensure uptake of their findings. This framework has the advantage that it distinguishes between pre-existing factors influencing uptake and the ways in which researchers can actively influence the policy landscape and promote research uptake through their policy engagement actions and strategies. We apply this framework to examples from the case study papers in this supplement, with specific discussion about the dynamics of SRH policy processes in resource poor contexts. We conclude by highlighting the need for continued multi-sectoral work on understanding and measuring research uptake and for prospective approaches to receive greater attention from policy analysts.”
