“Six Years of Lessons Learned in Monitoring and Evaluating Online Discussion Forums”

by Megan Avila, Kavitha Nallathambi, Catherine Richey, and Lisa Mwaikambo – in Knowledge Management & E-Learning: An International Journal (KM&EL), Vol 3, No 4 (2011)

…which looks at how to evaluate virtual discussion forums held on the IBP (Implementing Best Practices in Reproductive Health) Knowledge Gateway – a platform for global health practitioners to exchange evidence-based information and knowledge to inform practice. Available as a pdf. Found courtesy of Yaso Kunaratnam, IDS

Abstract: “This paper presents the plan for evaluating virtual discussion forums held on the Implementing Best Practices in Reproductive Health (IBP) Knowledge Gateway, and its evolution over six years. Since 2005, the World Health Organization Department of Reproductive Health and Research (WHO/RHR), the Knowledge for Health (K4Health) Project based at Johns Hopkins Bloomberg School of Public Health’s Center for Communication Programs (JHU/CCP), and partners of the IBP Initiative have supported more than 50 virtual discussion forums on the IBP Knowledge Gateway. These discussions have provided global health practitioners with a platform to exchange evidence-based information and knowledge with colleagues working around the world. In this paper, the authors discuss challenges related to evaluating virtual discussions and present their evaluation plan for virtual discussions. The evaluation plan included the following three stages: (I) determining value of the discussion forums, (II) in-depth exploration of the data, and (III) reflection and next steps, and was guided by the “Conceptual Framework for Monitoring and Evaluating Health Information Products and Services”, which was published as part of the Guide to Monitoring and Evaluation of Health Information Products and Services. An analysis of data from 26 forums is presented and discussed in light of this framework. The paper also includes next steps for improving the evaluation of future virtual discussions.”


“Unleashing the potential of AusAID’s performance data”

A posting on the Development Policy Blog by Stephen Howes, 15 February 2012.

This blog examines the latest annual report from AusAID’s Office of Development Effectiveness (ODE), released just before Christmas 2010. The report was published in two parts: one providing an international comparative perspective (summarized in an earlier posting), the other drawing on and assessing internal performance reporting. In this post the author continues his analysis with the “internal assessment” report.

He points out that the report data show poor performance to be a much more significant problem than outright fraud. He also examines the results of ODE’s spot checks on the quality of the self-assessment ratings. There is much else in the blog that is also of interest.

Of special interest are the concluding paragraphs: “This systematic collation of project self-ratings and the regular use of spot checks is best practice for any aid agency, and something AusAID should take pride in. The problem is that, as illustrated above, the reporting and analysis of these two rich sources of data is at the current time hardly even scratching the surface of their potential.

One way forward would be for ODE or some other part of AusAID to undertake and publish a more comprehensive report and analysis of this data. That would be a good idea, both to improve aid effectiveness and to enhance accountability.

But I have another suggestion. If the data is made public, we can all do our own analysis. This would tremendously enhance the debate in Australia on aid effectiveness, and take the attention away from red herrings such as fraud towards real challenges such as value-for-money.

AusAID’s newly-released Transparency Charter [pdf] commits the organization to publishing “detailed information on AusAID’s work”, including “the results of Australian aid activities and our evaluations and research.” The annual release of both the self-ratings and the spot checks would be a simple step, but one which would go a long way to fulfilling the Charter’s commitments.”

PS: Readers may be interested in similar data made available by DFID in recent years. See the “Do we need a minimum level of failure” blog posting.


What shapes research impact on policy?

…Understanding research uptake in sexual and reproductive health policy processes in resource-poor contexts

Andy Sumner, Jo Crichton, Sally Theobald, Eliya Zulu and Justin Parkhurst. Health Research Policy and Systems 2011, 9(Suppl 1):S3 Published: 16 June 2011

Abstract: “Assessing the impact that research evidence has on policy is complex. It involves consideration of conceptual issues of what determines research impact and policy change. There are also a range of methodological issues relating to the question of attribution and the counter-factual. The dynamics of SRH, HIV and AIDS, like many policy arenas, are partly generic and partly issue- and context-specific. Against this background, this article reviews some of the main conceptualisations of research impact on policy, including generic determinants of research impact identified across a range of settings, as well as the specificities of SRH in particular. We find that there is scope for greater cross-fertilisation of concepts, models and experiences between public health researchers and political scientists working in international development and research impact evaluation. We identify aspects of the policy landscape and drivers of policy change commonly occurring across multiple sectors and studies to create a framework that researchers can use to examine the influences on research uptake in specific settings, in order to guide attempts to ensure uptake of their findings. This framework has the advantage that it distinguishes between pre-existing factors influencing uptake and the ways in which researchers can actively influence the policy landscape and promote research uptake through their policy engagement actions and strategies. We apply this framework to examples from the case study papers in this supplement, with specific discussion about the dynamics of SRH policy processes in resource poor contexts. We conclude by highlighting the need for continued multi-sectoral work on understanding and measuring research uptake and for prospective approaches to receive greater attention from policy analysts.”

Social Psychology and Evaluation

by Melvin M. Mark PhD, Stewart I. Donaldson PhD, and Bernadette Campbell PhD (Editors). Guilford Press, May 2011. Available on Google Books.

Book blurb: “This compelling work brings together leading social psychologists and evaluators to explore the intersection of these two fields and how their theory, practices, and research findings can enhance each other. An ideal professional reference or student text, the book examines how social psychological knowledge can serve as the basis for theory-driven evaluation; facilitate more effective partnerships with stakeholders and policymakers; and help evaluators ask more effective questions about behavior. Also identified are ways in which real-world evaluation findings can identify gaps in social psychological theory and test and improve the validity of social psychological findings – for example, in the areas of cooperation, competition, and intergroup relations. The volume includes a useful glossary of both fields’ terms and offers practical suggestions for fostering cross-fertilization in research, graduate training, and employment opportunities. Each chapter features introductory and concluding comments from the editors.”

Diversity and Complexity

by Scott Page. Princeton University Press, 14/07/2011, 296 pages. Available on Google Books.

Abstract: “This book provides an introduction to the role of diversity in complex adaptive systems. A complex system – such as an economy or a tropical ecosystem – consists of interacting adaptive entities that produce dynamic patterns and structures. Diversity plays a different role in a complex system than it does in an equilibrium system, where it often merely produces variation around the mean for performance measures. In complex adaptive systems, diversity makes fundamental contributions to system performance. Scott Page gives a concise primer on how diversity happens, how it is maintained, and how it affects complex systems. He explains how diversity underpins system level robustness, allowing for multiple responses to external shocks and internal adaptations; how it provides the seeds for large events by creating outliers that fuel tipping points; and how it drives novelty and innovation. Page looks at the different kinds of diversity – variations within and across types, and distinct community compositions and interaction structures – and covers the evolution of diversity within complex systems and the factors that determine the amount of maintained diversity within a system. Provides a concise and accessible introduction. Shows how diversity underpins robustness and fuels tipping points. Covers all types of diversity. The essential primer on diversity in complex adaptive systems.”

RD Comment: This book is very useful for thinking about the measurement of diversity. In 2000 I wrote a paper “Does Empowerment Start At Home? And If So, How Will We Recognise It?” in which I argued that…

“At the population level, diversity of behaviour can be seen as a gross indicator of agency (of the ability to make choices), relative to homogenous behaviour by the same set of people. Diversity of behaviour suggests there is a range of possibilities which individuals can pursue. At the other extreme is standardisation of behaviour, which we often associate with limited choice. Perhaps the most notable example is that of an army. An army is a highly organised structure where individuality is not encouraged, and where standardised and predictable behaviour is very important. Like the term “NGO” or “non-profit”, diversity is defined by something that it is not – a condition where there is no common constraint, which would otherwise lead to a homogeneity of response. Homogeneity of behaviour may arise from various sources of constraint. A flood may force all farmers in a large area to move their animals to the high ground. Everybody’s responses are the same, when compared to what they would be doing on a normal day. At a certain time of the year all farmers may be planting the same crop. Here homogeneity of practice may reflect common constraints arising from a combination of sources: the nature of the physical environment, and the nature of particular local economies. Constraints on diversity can also arise within the assisting organisation. Credit programs can impose rules on loan use, specific repayment schedules and loan terms, as well as limiting when access to credit is available, or how quickly approval will be given.”
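
If one did want to quantify "diversity of behaviour" at the population level, two standard indices from the ecology literature – Shannon entropy and the Simpson index – are an obvious starting point. The sketch below is purely illustrative (not taken from the book or the paper), using a made-up crop-choice example in the spirit of the quote above:

```python
import math
from collections import Counter

def shannon_entropy(observations):
    """Shannon entropy (in bits) of the distribution of observed behaviours.
    Zero when everyone behaves identically; higher values mean more diversity."""
    counts = Counter(observations)
    n = len(observations)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def simpson_index(observations):
    """Simpson diversity (1 - sum of p_i squared): the probability that two
    randomly chosen individuals exhibit different behaviours."""
    counts = Counter(observations)
    n = len(observations)
    return 1 - sum((c / n) ** 2 for c in counts.values())

# Hypothetical data: a homogeneous population (all farmers planting one crop)
uniform = ["maize"] * 10
# ...versus a more varied population
varied = ["maize", "beans", "cassava", "maize", "beans",
          "rice", "maize", "cassava", "rice", "millet"]
```

Both indices return 0 for the uniform population and rise as behaviour becomes more varied, which makes them simple candidates for the kind of gross agency indicator described above.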

See also…