The “Rick on the Road [2]” blog is where I write about specific M&E issues that are of interest to me. Here is a list of the “Rick on the Road” editorial postings, by year, most recent first:
2023
- Finding useful distinctions between different futures [3]
- How can evaluators practically think about multiple Theories of Change in a particular context? [4]
2022
- Four types of futures that should be covered by a Theory of Change [5]
- We need more doubt and uncertainty! [6]
- Using ParEvo to conduct thought experiments [7]
- Alternative futures as “search strategies” [8]
- Budgets as theories [9]
- Making small samples of large populations useful [10]
2021
- Choosing between simpler and more complex versions of a Theory of Change [11]
- Exploring counterfactual histories of an intervention [12]
- Reconciling the need for both horizontal and vertical dimensions in a Theory of Change diagram [13]
- Diversity and complexity? Where should we focus our attention? [14]
- Paired case comparisons as an alternative to a configurational analysis (QCA or otherwise) [15]
- The potential use of Scenario Planning methods to help articulate a Theory of Change [16]
- Mapping the “structure of cooperation”: Adding the time dimension and thinking about further analyses [17]
- Connecting Scenario Planning and Theories of Change [18]
2020
- The implications of complex program designs: Six proposals worth exploring? [19]
- “If you want to think outside of the box, you first need to find the box” – some practical evaluative thinking about Futures Literacy [20]
- Has the meaning of impact evaluation been hijacked? [21]
- Quality of Evidence criteria that can be applied to Most Significant Change (MSC) stories [22]
- Mapping the structure of cooperation [23]
- EvalC3 versus QCA – compared by a re-analysis of one data set [24]
- Converting a continuous variable into a binary variable i.e. dichotomising [25]
- Rubrics? Yes, but… [26]
- Temporal networks: Useful static representations of dynamic events [27]
2019
- Combining the use of the Confusion Matrix as a visualisation tool with a Bayesian view of probability [28]
- On finding the weakest link… [29]
- Participatory design of network models: Some implications for analysis [30]
- Extracting additional value from the analysis of QuIP data [31]
- On evaluating innovation [32]
- Where there is no (decent / usable) Theory of Change… [33]
- On using clustering algorithms to help with sampling decisions [34]
2018
2017
2016
- …and then a miracle happens (or two or three) [37]
- Three ways of thinking about linearity [38]
- EvalC3 – an Excel-based package of tools for exploring and evaluating complex causal configurations [39]
- Why we should also pay attention to “what does not work” [40]
- Why I am sick of Evaluation Questions! [41]
2015
- False Positives – why we should pay more attention to them [42]
- Macro versus meta Theories of Change [43]
- Clustering projects according to similarities in outcomes they achieve [44]
- Evolving better performing hypotheses, using Excel [45]
- Is QCA its own worst enemy? [46]
- Characterising purposive samples [47]
- Evaluating the performance of binary predictions [48]
- How to select which hypotheses to test? [49]
- In defense of the (careful) use of algorithms and the need for dialogue between tacit (expertise) and explicit (rules) forms of knowledge [50]
- A mistaken criticism of the value of binary data [51]
2014
- Comparing QCA and Decision Tree models – an ongoing discussion [52]
- Pair comparisons: For where there is no common outcome measure? [53]
- The challenges of using QCA [54]
- Thinking about set relationships within monitoring data [55]
2013
- Complex Theories of Change: Recipes for failure or for learning? [56]
- Measuring the impact of ideas: Some testable propositions [57]
- A reverse QCA? [58]
- Another perspective on the uses of control groups [59]
- An example application of Decision Tree models [60]
- My problem with RCTs [61]
2012
- Evolutionary strategies for complex environments [62]
- AusAID’s ‘Revitalising Indonesia’s Knowledge Sector for Development Policy’ program [63]
- Open source evaluation – the way forward? [64]
- Representing different combinations of causal conditions [65]
- A perspective on “Value for Money” relationships [66]
- Data mining algorithms as evaluation tools [67]
- Criteria for assessing the evaluability of Theories of Change [67]
- Can we evolve explanations of observed outcomes? [68]
- Modular Theories of Change: A means of coping with diversity and change? [69]
- Evaluation questions: Managing agency, bias and scale [70]
2011
- Evaluation quality standards: Theories in need of testing? [71]
- Relative rather than absolute counterfactuals: A more useful alternative? [72]
- Evaluation methods looking for projects or projects seeking appropriate evaluation methods? [73]
- Models and reality: Dialogue through simulation [74]
- A submission to the UK Independent Commission for Aid Impact (ICAI) [74]
2010
- Counter-factuals and counter-theoreticals: What to do when random assignment is not an option…. [75]
- Do we need a Minimal Level of Failure (MLF)? [76]
- Meta-narratives, evaluation and complexity [77]
- Cynefin Framework versus Stacey Matrix versus network perspectives [78]
- Evaluating a composite Theory of Change (ToC) [79]
2009
- Reflections on Dave Snowden’s presentations on sense-making and complexity [80]
- On the poverty of baselines and targets… [81]
- Why we should make economists work harder [82]
- Constructing longer term perspectives [83]
- Bibliographic Timelines [84]
2008
- Comments on the draft DFID evaluation policy [85]
- An aid bubble? – Interpreting aid trends [86]
- Aid organisations as self-interested businesses? [87]
- Social Frameworks: An improvement on the Logical Framework? [88]
- Assessing achievements in Katine, Uganda [89]
- A network approach to the selection of “Most Significant Change” stories [90]
2007
- Managing expectations about monitoring and evaluation in Katine [91]
- Katine: an experiment in more publicly transparent aid processes [92]
- Checklists as mini theories-of-change [93]
- Evolving storylines: A participatory design process? [94]
- Prediction markets as a source of independent and continuous evaluation for development projects? [95]
2006
- Assumptions, evidence and multiple stakeholders [96]
- Evidence that the (development) world is getting better [97]
- Integrating funding applications and baseline surveys [98]
- The risks of big increases in aid flows to poor countries [99]
- The “attribution problem” problem [100]
2005
- Impact pathways and genealogies [101]
- Networks of Indicators [102]
- Fight institutional Alzheimer’s [103]
- Using “modular matrices” to describe programme intentions and achievements [104]
- Constructing “an auditable trail of intentions….” [105]
- Identifying the impact of evaluations: Follow the money? [106]
- Learning circles and loops: Time for some more sophisticated representations [107]
2004
- No more paradigm changes please! [109]
- Where have all the evaluations gone? [110]
- Projects versus Project Funding Mechanisms [111]
- Treating organisations as though they were machines [112]
- Is moving the goal posts a good thing? [113]
- Where are the partners? [114]
- Monitoring empowerment: A contradiction in terms? [115]
- Why did the chicken cross the road? [116]
- Thinking about networks of policies [117]
- Question: How do you assess a country’s ownership of a PRSP? [118]
- Hypothesis-led Surveys of Influence – on KAP [119]
- PRSP Monitoring: Target fixation and mission creep [120]
Readers may also be interested in:
- The Rick’s Methods [121] page on this site
- Rick Davies’ comments on other websites and blogs [122] [in process]
- My YouTube presentations [123] from some training events and conferences
- See also Evaluating Katine [124]