Archive for the ‘Uncategorized’ Category

Recent readings: Replication of findings (not), arguments for/against “mixed methods”, use of algorithms (public accountability, costs/benefits, metadata)

Tuesday, September 12th, 2017

Recently noted papers of interest on my Twitter feed: Go Forth and Replicate: On Creating Incentives for Repeat Studies. Scientists have few direct incentives to replicate other researchers’ work, including precious little funding to do replications. Can that change? 09.11.2017 / BY Michael Schulson "A survey of 1,500 scientists, conducted by the ...

Why have evaluators been slow to adopt big data analytics?

Saturday, September 9th, 2017

This is a question posed by Michael Bamberger in his blog posting on the MERL Tech website, titled Building bridges between evaluators and big data analysts. There he puts forward eight reasons (four main ones and four subsidiary points), none of which I disagree with. But I have my own perspective on ...

Order and Diversity: Representing and Assisting Organisational Learning in Non-Government Aid Organisations.

Sunday, July 23rd, 2017

No, history did not begin three years ago ;-) "It was twenty years ago today..." well almost. Here is a link to my 1998 PhD Thesis of the above title. It was based on field work I carried out in Bangladesh between 1992 and 1995. Chapter 8 describes the first implementation of what ...

Twitter posts tagged as #evaluation

Thursday, July 13th, 2017

This post should feature a continually updated feed of all Twitter tweets tagged as #evaluation.


Monday, May 8th, 2017

Kenya Heard, Elisabeth O’Toole, Rohit Naimpally, Lindsey Bressler. J-PAL North America, April 2017. PDF copy here INTRODUCTION Randomized evaluations, also called randomized controlled trials (RCTs), have received increasing attention from practitioners, policymakers, and researchers due to their high credibility in estimating the causal impacts of programs and policies. In a randomized evaluation, a random ...
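The core logic the excerpt describes — random assignment making a simple difference in group means a credible impact estimate — can be sketched in a few lines of Python. This is my own illustration, not from the J-PAL guide, and all the outcome numbers below are invented for the example:

```python
import random

random.seed(42)

# 100 hypothetical units, randomly split into treatment and control.
units = list(range(100))
random.shuffle(units)
treatment, control = set(units[:50]), set(units[50:])

# Invented outcomes: a baseline of 10, a true programme effect of +2
# for treated units, plus noise.
outcome = {}
for u in units:
    effect = 2.0 if u in treatment else 0.0
    outcome[u] = 10.0 + effect + random.gauss(0, 1)

# Because assignment was random, the difference in group means is an
# unbiased estimate of the programme's causal impact.
impact = (sum(outcome[u] for u in treatment) / len(treatment)
          - sum(outcome[u] for u in control) / len(control))
print(round(impact, 2))  # close to the true effect of 2
```

With random assignment the two groups differ, on average, only in programme exposure, which is why the naive mean comparison recovers the causal effect here.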

Riddle me this: How many interviews (or focus groups) are enough?

Monday, May 8th, 2017

Emily Namey, R&E Search for Evidence "The first two posts in this series describe commonly used research sampling strategies and provide some guidance on how to choose from this range of sampling methods. Here we delve further into the sampling world and address sample sizes for qualitative research and evaluation ...

How to find the right answer when the “wisdom of the crowd” fails?

Sunday, April 9th, 2017

Dizikes, P. (2017). Better wisdom from crowds. MIT News Office. Retrieved from  PDF copy. Ross, E. (n.d.). How to find the right answer when the “wisdom of the crowd” fails. Nature News. Prelec, D., Seung, H. S., & McCoy, J. (2017). A solution to the single-question crowd wisdom problem. Nature, ...
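The Prelec, Seung & McCoy paper cited above proposes picking the answer that is "surprisingly popular": the one whose actual vote share exceeds what respondents predicted it would be. A minimal sketch of that selection rule, with invented poll numbers (the function name and data are mine, not the paper's):

```python
def surprisingly_popular(actual_yes_share, predicted_yes_shares):
    """Return 'yes' if the actual 'yes' vote share exceeds the mean
    share respondents predicted for 'yes', else 'no'."""
    mean_predicted = sum(predicted_yes_shares) / len(predicted_yes_shares)
    return "yes" if actual_yes_share > mean_predicted else "no"

# Example in the spirit of the paper's Philadelphia question: most
# people vote "yes" (incorrectly), and even the informed minority
# expects "yes" to be popular, so predictions of the "yes" share run
# high. "Yes" is popular but not *surprisingly* popular, so the rule
# selects "no".
actual = 0.60
predicted = [0.9, 0.8, 0.75, 0.7, 0.85]  # invented predictions

print(surprisingly_popular(actual, predicted))  # -> no
```

The intuition: respondents who hold minority knowledge expect their own answer to be unpopular, so when an answer outperforms the crowd's forecast of it, that surplus is evidence the answer is correct.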

Fact-checking websites serving as public evidence-monitoring services: Some sources

Thursday, March 2nd, 2017

These services seem to be getting more attention lately, so I thought it would be worthwhile compiling a list of some of the kinds of fact checking websites that exist, and how they work. Fact checkers have the potential to influence policies at all stages of the policy development and implementation ...

Integrating Big Data into the Monitoring and Evaluation of Development Programmes

Tuesday, January 24th, 2017

Bamberger, M. (2016). Integrating Big Data into the Monitoring and Evaluation of Development Programmes. United Nations Global Pulse. Retrieved from  PDF copy available Context: "This report represents a basis for integrating big data and data analytics in the monitoring and evaluation of development programmes. The report proposes a ...

Monitoring and Evaluation in Health and Social Development: Interpretive and Ethnographic Perspectives

Tuesday, January 17th, 2017

Edited by Stephen Bell and Peter Aggleton. Routledge 2016. View on Google Books "interpretive researchers thus attempt to understand phenomena through accessing the meanings participants assign to them" "...interpretive and ethnographic approaches are side-lined in much contemporary evaluation work and current monitoring and evaluation practice remains heavily influenced by more positivist approaches" "attribution ...