What Causes What & Hypothesis Testing: Truth and Evidence

Two very useful chapters in Denise Cummins (2012) “Good Thinking”, Cambridge University Press

Cummins is a professor of psychology and philosophy, both of which she brings to bear in this great book. Read an interview with the author here.

Contents include:

1. Introduction
2. Rational choice: choosing what is most likely to give you what you want
3. Game theory: when you’re not the only one choosing
4. Moral decision-making: how we tell right from wrong
5. The game of logic
6. What causes what?
7. Hypothesis testing: truth and evidence
8. Problem solving: another way of getting what you want
9. Analogy: this is like that.

New Directions for Evaluation: Promoting Valuation in the Public Interest: Informing Policies for Judging Value in Evaluation

Spring 2012, Volume 2012, Issue 133, Pages 1–129. Buy here

Editor’s Notes – George Julnes

  1. Editor’s notes (pages 1–2)

Research Articles

  1. Managing valuation (pages 3–15) – George Julnes
  2. The logic of valuing (pages 17–28) – Michael Scriven
  3. The evaluator’s role in valuing: Who and with whom (pages 29–41) – Marvin C. Alkin, Anne T. Vo and Christina A. Christie
  4. Step arounds for common pitfalls when valuing resources used versus resources produced (pages 43–52) – Brian T. Yates
  5. When one must go: The Canadian experience with strategic review and judging program value (pages 65–75) – François Dumaine
  6. Valuing, evaluation methods, and the politicization of the evaluation process (pages 77–83) – Eleanor Chelimsky
  7. Valuation and the American Evaluation Association: Helping 100 flowers bloom, or at least be understood? (pages 85–90) – Michael Morris

Integrated Monitoring: A Practical Manual for Organisations That Want to Achieve Results

Written by Sonia Herrero, InProgress, Berlin, April 2012. 43 pages. Available as pdf

“The aim of this manual is to help those working in the non-profit sector — non-governmental organisations (NGOs) and other civil society organisations (CSOs) — and the donors which fund them, to observe more accurately what they are achieving through their efforts and to ensure that they make a positive difference in the lives of the people they want to help. Our interest in writing this guide has grown out of the desire to help bring some conceptual clarity to the concepts of monitoring and to determine ways in which they can be harnessed and used more effectively by non-profit practitioners.

The goal is to help organisations build monitoring and evaluation into all your project management efforts. We want to demystify the monitoring process and make it as simple and accessible as possible. We have made a conscious choice to avoid technical language, and instead use images and analogies that are easier to grasp. There is a glossary at the end of the manual which contains the definitions of any terms you may be unfamiliar with. This manual is organised into two parts. The first section covers the ‘what’ and ‘why’ of monitoring and evaluation; the second addresses how to do it.”

These materials may be freely used and copied by non-profit organisations for capacity building purposes, provided that inProgress and authorship are acknowledged. They may not be reproduced for commercial gain.

Contents
Introduction
I. KEY ASPECTS OF MONITORING
1. What is Monitoring?
2. Why Do We Monitor and For Whom?
3. Who is Involved?
4. How Does it Work?
5. When Do We Monitor?
6. What Do We Monitor?
6.1 Monitoring What We Do
II. HOW DO WE MONITOR?
1. Steps for Setting Up a Monitoring System
2. How to Monitor the Process and the Outputs
3. How to Monitor the Achievement of Results
3.1 Define Results/Outcomes
3.2 Define Indicators for Results
4. Prepare a Detailed Monitoring Plan
5. Identify Sources of Information
6. Data Collection
6.1 Tools for Data Compilation
7. Reflection and Analysis
7.1 Documenting and Sharing
8. Learning and Reviewing
8.1 Learning
8.2 Reviewing
9. Evaluation
Conclusion
Glossary
References

Magenta Book – HM Treasury guidance on evaluation for Central Government (UK)

27 April 2011

“The Magenta Book is HM Treasury guidance on evaluation for Central Government, but will also be useful for all policy makers, including in local government, charities and the voluntary sectors. It sets out the key issues to consider when designing and managing evaluations, and the presentation and interpretation of evaluation results. It describes why thinking about evaluation before and during the policy design phase can help to improve the quality of evaluation results without needing to hinder the policy process.

The book is divided into two parts.

Part A is designed for policy makers. It sets out what evaluation is, and what the benefits of good evaluation are. It explains in simple terms the requirements for good evaluation, and some straightforward steps that policy makers can take to make a good evaluation of their intervention more feasible.

Part B is more technical, and is aimed at analysts and interested policy makers. It discusses in more detail the key steps to follow when planning and undertaking an evaluation and how to answer evaluation research questions using different evaluation research designs. It also discusses approaches to the interpretation and assimilation of evaluation evidence.

The Magenta Book will be supported by a wide range of forthcoming supplementary guidance containing more detailed guidance on particular issues, such as statistical analysis and sampling. Until these are available please refer to the relevant chapters of the original Magenta Book.”

The Magenta Book is available for download in PDF format.

An introduction to systematic reviews

Book published in March 2012 by Sage. Authors: David Gough, Sandy Oliver, James Thomas

Read Chapter One pdf: Introducing systematic reviews

Contents:

1. Introducing Systematic Reviews David Gough, Sandy Oliver and James Thomas
2. Stakeholder Perspectives and Participation in Reviews Rebecca Rees and Sandy Oliver
3. Commonality and Diversity in Reviews David Gough and James Thomas
4. Getting Started with a Review Sandy Oliver, Kelly Dickson, and Mark Newman
5. Information Management in Reviews Jeff Brunton and James Thomas
6. Finding Relevant Studies Ginny Brunton, Claire Stansfield and James Thomas
7. Describing and Analysing Studies Sandy Oliver and Katy Sutcliffe
8. Quality and Relevance Appraisal Angela Harden and David Gough
9. Synthesis: Combining results systematically and appropriately James Thomas, Angela Harden and Mark Newman
10. Making a Difference with Systematic Reviews Ruth Stewart and Sandy Oliver
11. Moving Forward David Gough, Sandy Oliver and James Thomas

“Six Years of Lessons Learned in Monitoring and Evaluating Online Discussion Forums”

by Megan Avila, Kavitha Nallathambi, Catherine Richey and Lisa Mwaikambo – in Knowledge Management & E-Learning: An International Journal (KM&EL), Vol 3, No 4 (2011)

…which looks at how to evaluate virtual discussion forums held on the IBP (Implementing Best Practices in Reproductive Health) Knowledge Gateway – a platform for global health practitioners to exchange evidence-based information and knowledge to inform practice. Available as pdf. Found courtesy of Yaso Kunaratnam, IDS.

Abstract: “This paper presents the plan for evaluating virtual discussion forums held on the Implementing Best Practices in Reproductive Health (IBP) Knowledge Gateway, and its evolution over six years. Since 2005, the World Health Organization Department of Reproductive Health and Research (WHO/RHR), the Knowledge for Health (K4Health) Project based at Johns Hopkins Bloomberg School of Public Health’s Center for Communication Programs (JHU·CCP), and partners of the IBP Initiative have supported more than 50 virtual discussion forums on the IBP Knowledge Gateway. These discussions have provided global health practitioners with a platform to exchange evidence-based information and knowledge with colleagues working around the world. In this paper, the authors discuss challenges related to evaluating virtual discussions and present their evaluation plan for virtual discussions. The evaluation plan included the following three stages: (I) determining value of the discussion forums, (II) in-depth exploration of the data, and (III) reflection and next steps, and was guided by the “Conceptual Framework for Monitoring and Evaluating Health Information Products and Services” which was published as part of the Guide to Monitoring and Evaluation of Health Information Products and Services. An analysis of data from 26 forums is presented and discussed in light of this framework. The paper also includes next steps for improving the evaluation of future virtual discussions.”

 

“Unleashing the potential of AusAID’s performance data”

A posting on the Development Policy Blog by Stephen Howes, on 15 February 2012.

This blog post examines the latest annual report from AusAID’s Office of Development Effectiveness (ODE), released just before Christmas 2010, which was published in two parts: one providing an international comparative perspective (and summarized in an earlier post), the other drawing on and assessing internal performance reporting. Here the author continues his analysis of the “internal assessment” report.

He points out that the report data shows that poor performance is a much more significant problem than outright fraud. He also examines the results of ODE’s spot checks on the quality of the self-assessment ratings. There is much else of interest in the blog post.

Of special interest are the concluding paragraphs: “This systematic collation of project self-ratings and the regular use of spot checks is best practice for any aid agency, and something AusAID should take pride in. The problem is that, as illustrated above, the reporting and analysis of these two rich sources of data is at the current time hardly even scratching the surface of their potential.

One way forward would be for ODE or some other part of AusAID to undertake and publish a more comprehensive report and analysis of this data. That would be a good idea, both to improve aid effectiveness and to enhance accountability.

But I have another suggestion. If the data is made public, we can all do our own analysis. This would tremendously enhance the debate in Australia on aid effectiveness, and take the attention away from red herrings such as fraud towards real challenges such as value-for-money.

AusAID’s newly-released Transparency Charter [pdf] commits the organization to publishing “detailed information on AusAID’s work” including “the results of Australian aid activities and our evaluations and research.” The annual release of both the self-ratings and the spot checks would be a simple step, but one which would go a long way to fulfilling the Charter’s commitments.”
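To make the point concrete, here is a minimal sketch of the kind of secondary analysis that publishing this data would allow. The project names, rating scale and figures below are entirely hypothetical (the real self-ratings and spot-check results are not public); the sketch simply cross-tabulates project self-ratings against independent spot-check ratings to see how often projects rate themselves more favourably than the reviewers do.

```python
# Hypothetical sketch only: made-up projects and ratings, not actual AusAID figures.
# If self-ratings and spot-check ratings were published, an analysis like this
# could quantify how much self-assessment over-rates performance.
import pandas as pd

ratings = pd.DataFrame({
    "project":           ["A", "B", "C", "D", "E"],   # hypothetical projects
    "self_rating":       [5, 4, 5, 3, 6],             # staff self-assessment (assumed 1-6 scale)
    "spot_check_rating": [4, 4, 3, 3, 5],             # independent spot-check rating
})

# Share of projects whose self-rating exceeds the spot-check rating
ratings["over_rated"] = ratings["self_rating"] > ratings["spot_check_rating"]
print("Proportion over-rated:", ratings["over_rated"].mean())

# Full cross-tabulation of self-ratings against spot-check ratings
print(pd.crosstab(ratings["self_rating"], ratings["spot_check_rating"]))
```

Released annually, even a simple comparison like this would let outside analysts track whether the gap between self-ratings and spot checks is narrowing over time.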

PS: Readers may be interested in similar data made available by DFID in recent years. See the “Do we need a minimum level of failure” blog posting.

 

What shapes research impact on policy?

…Understanding research uptake in sexual and reproductive health policy processes in resource poor contexts

Andy Sumner, Jo Crichton, Sally Theobald, Eliya Zulu and Justin Parkhurst. Health Research Policy and Systems 2011, 9(Suppl 1):S3. Published 16 June 2011

Abstract: “Assessing the impact that research evidence has on policy is complex. It involves consideration of conceptual issues of what determines research impact and policy change. There are also a range of methodological issues relating to the question of attribution and the counter-factual. The dynamics of SRH, HIV and AIDS, like many policy arenas, are partly generic and partly issue- and context-specific. Against this background, this article reviews some of the main conceptualisations of research impact on policy, including generic determinants of research impact identified across a range of settings, as well as the specificities of SRH in particular. We find that there is scope for greater cross-fertilisation of concepts, models and experiences between public health researchers and political scientists working in international development and research impact evaluation. We identify aspects of the policy landscape and drivers of policy change commonly occurring across multiple sectors and studies to create a framework that researchers can use to examine the influences on research uptake in specific settings, in order to guide attempts to ensure uptake of their findings. This framework has the advantage that it distinguishes between pre-existing factors influencing uptake and the ways in which researchers can actively influence the policy landscape and promote research uptake through their policy engagement actions and strategies. We apply this framework to examples from the case study papers in this supplement, with specific discussion about the dynamics of SRH policy processes in resource poor contexts. We conclude by highlighting the need for continued multi-sectoral work on understanding and measuring research uptake and for prospective approaches to receive greater attention from policy analysts.”

Social Psychology and Evaluation

by Melvin M. Mark PhD (Editor), Stewart I. Donaldson PhD (Editor), Bernadette Campbell PhD (Editor). Guilford Press, May 2011. Available on Google Books.

Book blurb: “This compelling work brings together leading social psychologists and evaluators to explore the intersection of these two fields and how their theory, practices, and research findings can enhance each other. An ideal professional reference or student text, the book examines how social psychological knowledge can serve as the basis for theory-driven evaluation; facilitate more effective partnerships with stakeholders and policymakers; and help evaluators ask more effective questions about behavior. Also identified are ways in which real-world evaluation findings can identify gaps in social psychological theory and test and improve the validity of social psychological findings–for example, in the areas of cooperation, competition, and intergroup relations. The volume includes a useful glossary of both fields’ terms and offers practical suggestions for fostering cross-fertilization in research, graduate training, and employment opportunities. Each chapter features introductory and concluding comments from the editors.”

Diversity and Complexity

by Scott Page. Princeton University Press, 14 July 2011, 296 pages. Available on Google Books.

Abstract: This book provides an introduction to the role of diversity in complex adaptive systems. A complex system–such as an economy or a tropical ecosystem–consists of interacting adaptive entities that produce dynamic patterns and structures. Diversity plays a different role in a complex system than it does in an equilibrium system, where it often merely produces variation around the mean for performance measures. In complex adaptive systems, diversity makes fundamental contributions to system performance. Scott Page gives a concise primer on how diversity happens, how it is maintained, and how it affects complex systems. He explains how diversity underpins system level robustness, allowing for multiple responses to external shocks and internal adaptations; how it provides the seeds for large events by creating outliers that fuel tipping points; and how it drives novelty and innovation. Page looks at the different kinds of diversity–variations within and across types, and distinct community compositions and interaction structures–and covers the evolution of diversity within complex systems and the factors that determine the amount of maintained diversity within a system. Provides a concise and accessible introduction. Shows how diversity underpins robustness and fuels tipping points. Covers all types of diversity. The essential primer on diversity in complex adaptive systems.

RD Comment: This book is very useful for thinking about the measurement of diversity. In 2000 I wrote a paper “Does Empowerment Start At Home? And If So, How Will We Recognise It?” in which I argued that…

“At the population level, diversity of behaviour can be seen as a gross indicator of agency (of the ability to make choices), relative to homogenous behaviour by the same set of people. Diversity of behaviour suggests there is a range of possibilities which individuals can pursue. At the other extreme is standardisation of behaviour, which we often associate with limited choice. The most notable example being perhaps that of an army. An army is a highly organised structure where individuality is not encouraged, and where standardised and predictable behaviour is very important. Like the term “NGO” or “non-profit”, diversity is defined by something that it is not – a condition where there is no common constraint, which would otherwise lead to a homogeneity of response. Homogeneity of behaviour may arise from various sources of constraint. A flood may force all farmers in a large area to move their animals to the high ground. Everybody’s responses are the same, when compared to what they would be doing on a normal day. At a certain time of the year all farmers may be planting the same crop. Here homogeneity of practice may reflect common constraints arising from a combination of sources: the nature of the physical environment, and the nature of particular local economies. Constraints on diversity can also arise within the assisting organisation. Credit programs can impose rules on loan use, specific repayment schedules and loan terms, as well as limiting when access to credit is available, or how quickly approval will be given.”
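A minimal sketch of how diversity of behaviour could be quantified along these lines (this is an illustration, not something taken from Page’s book or the 2000 paper): treat each person’s observed behaviour as a category and compute standard diversity indices over the population. The behaviour labels in the example are invented.

```python
# Minimal sketch, not from the book or the 2000 paper: quantify diversity of
# behaviour across a population by treating each person's observed behaviour
# as a category and computing standard diversity indices.
from collections import Counter
import math

def diversity_indices(behaviours):
    """Return (Shannon entropy, Simpson diversity) for a list of behaviour labels."""
    counts = Counter(behaviours)
    n = sum(counts.values())
    proportions = [c / n for c in counts.values()]
    shannon = -sum(p * math.log(p) for p in proportions)  # 0 when everyone behaves identically
    simpson = 1 - sum(p * p for p in proportions)         # chance that two random people differ
    return shannon, simpson

# Invented example: crop choices of ten farmers in a normal season versus the
# single response forced on everyone by a flood (a common constraint).
normal_season = ["maize", "rice", "cassava", "maize", "beans",
                 "rice", "maize", "cassava", "beans", "rice"]
after_flood = ["move animals to high ground"] * 10

print(diversity_indices(normal_season))  # relatively high diversity: a range of choices exercised
print(diversity_indices(after_flood))    # (0.0, 0.0): homogeneous behaviour, limited choice
```

On this reading, a fall in such an index after an intervention (or after an external shock, as in the flood example in the quote above) would flag a narrowing of the choices actually being exercised.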

See also…