A guide for planning and strategy development in the face of complexity

By Richard Hummelbrunner and Harry Jones
ODI Background Note, March 2013. Available as pdf

“Many argue that governments, non-governmental organisations and international agencies should spend less time planning in advance and more time adapting programmes to changing scenarios and learning by doing. In the complex context of development, how can policy makers, managers and practitioners best plan in the face of complexity? Does complexity make planning an irrelevant exercise?

This Background Note is a guide, explaining how planning and strategy development can be carried out despite complexity. While it is true that complex situations require a greater focus on learning and adaptation, this does not render planning irrelevant. In fact, there are ways in which the processes and products of planning can respect the realities of the situation and set up interventions (policies, programmes and projects) to give them the best chance of success.

The guide builds on academic, policy and programmatic literature related to themes around systems and complexity, and draws on the authors’ experience of advising development agencies and governments in both developed and developing countries.

The note covers three points:

  1. How to recognise a complex situation and what challenges it will pose
  2. Principles for planning in the face of complexity
  3. Examples of planning approaches that address complexity”

Rick Davies comment: Over two hundred years ago William Blake exclaimed in verse, “May God us keep From Single vision & Newton’s sleep”. If he were to read the current literature on complexity, planning and evaluation he might be tempted to repeat his advice, again and again, until it seeped through. Why do I think this? I searched this ODI paper for three magic words: diversity, difference and variation. Their existence in real life is the raw fuel of evolutionary processes, which have enabled living organisms to survive amidst radically changeable environments over aeons of time on earth. And lo and behold, most of these organisms don’t seem to have much in the way of planning units or strategy formulation processes. Evolution is a remarkably effective but non-teleological (i.e. not goal-driven) process of innovation and adaptation.

While I did not find the words diversity and variation in the ODI text, I was pleased to find one brief reference to the use of evolutionary processes, as follows:

“Another option is an ‘evolutionary’ approach, whereby a plan is not seen as a single ‘big bet’ but rather as a portfolio of experiments, by setting an over-riding goal and then pursuing a diverse set of plans simultaneously, each of which has the potential to evolve. We could also adopt a ‘breadth first’ approach with ‘trial and error’ as the central aim of the initial stage of implementation, to encourage parallel testing of a variety of small-scale interventions”

One means of ensuring sufficient diversity in experiments is to decentralise resources and the control over those resources. This can happen in projects which have explicit empowerment objectives, and also in other kinds of projects that are large in scale and working in a diversity of environments, where central controls can be loosened, either by accident or by intention. In my experience there are already plenty of natural experiments with experimentation underway; the problem is the failure to capitalise on them. One reason is the continued fixation with a single vision, that is, an over-arching Theory of Change, embedded in a LogFrame and/or other planning formats, which ends up dominating evaluators’ attention and use of time. This includes my own evaluation practice, mea culpa, notably with four projects in Indonesia between 2005 and 2010.

The alternative is to develop testable models that incorporate multiple causal pathways. In the past I have emphasised the potential of network models of change, where changes can be effected via multiple influence pathways within complex networks of relationships between different actors. The challenge with this approach is to develop adequate descriptions of those networks and the pathways within them. More recently I have been arguing for the use of a simpler representational device, known as Decision Tree models, which can be constructed, and triangulated, using a variety of means (QCA, data mining algorithms, participatory and ethnographic techniques). The characteristics of a portfolio of diverse activities can be summarised in the form of Decision Tree models, which can then be tested for their degree of fit with observed differences in the outcomes of those activities. The structure of Decision Tree models enables them to represent multiple configurations of different causal conditions, identified before and/or after their implementation. More information on their design and use is provided in this paper: “Where there is no single Theory of Change: The uses of Decision Tree models”. While I have shared this paper with various writers on evaluation and complexity, none seem to have seen its relevance to complexity issues, possibly because in many writings on complexity the whole issue of diversity gets much less attention than the issue of unpredictability. I say this with some hesitation, since Ben Ramalingam’s forthcoming book on complexity does have a whole section on the perils of “Best-practicitis”, i.e. single-vision views of development.
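
To make this concrete, here is a minimal sketch (my own illustration, not taken from the Decision Tree paper) of how such a model might be derived from a portfolio of project records using a standard data mining algorithm. The project characteristics, outcomes and field names are all invented for the purpose.

```python
# A minimal, hypothetical sketch: fitting a Decision Tree model to a small
# portfolio of project records. The characteristics, outcomes and field
# names are all invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["local_partner", "cash_grants", "prior_presence"]

# Each row describes one project: 1 = characteristic present, 0 = absent
projects = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
]
# Observed outcome for each project: 1 = outcome achieved, 0 = not achieved
outcomes = [1, 1, 0, 0, 1, 0]

# A shallow tree keeps the model readable: each leaf is one configuration
# of causal conditions associated with an outcome
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(projects, outcomes)

# Print the tree as nested if/else rules
print(export_text(model, feature_names=FEATURES))

# "Degree of fit" here is simply accuracy against the observed outcomes
print("Fit with observed outcomes:", model.score(projects, outcomes))
```

The printed tree reads as a set of nested if/else rules, with each leaf corresponding to one configuration of causal conditions; it is this capacity to hold multiple configurations at once that distinguishes the approach from a single overarching Theory of Change.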

Incidentally, for an interesting but demanding read on the many relationships between diversity and complexity, I recommend Scott Page’s “Diversity and Complexity” (2011).

 

Improving the Evaluability of INGO Empowerment and Accountability Programmes

Shutt, C. and McGee, R. CDI Practice Paper 1, March 2013. Publisher: IDS. Available as pdf (109kb)

Abstract
This CDI Practice Paper is based on an analysis of international NGO (INGO) evaluation practice in empowerment and accountability (E&A) programmes commissioned by CARE UK, Christian Aid, Plan UK and World Vision UK. It reviews evaluation debates and their implications for INGOs. The authors argue that if INGOs are to successfully ‘measure’ or assess outcomes and impacts of E&A programmes, they need to shift attention from methods to developing more holistic and complexity-informed evaluation strategies during programme design. Final evaluations or impact assessments are no longer discrete activities, but part of longer-term learning processes. Given the weak evaluation capacity within the international development sector, this CDI Practice Paper concludes that institutional donors must have realistic expectations and support INGOs to develop their evaluation capacity in keeping with cost–benefit considerations. Donors might also need to reconsider the merits of trying to evaluate the ‘impact’ of ‘demand-side’ NGO governance programmes independently of potentially complementary ‘supply-side’ governance initiatives.

See also: Tools and Guidelines for Improving the Evaluability of INGO Empowerment and Accountability Programmes. Centre for Development Impact, Practice Paper No. 1 Annex, March 2013

Livestreaming of the Impact, Innovation & Learning conference, 26-27 March 2013

(via Xceval)

Dear Friends
You may be interested in following next week’s Impact, Innovation and Learning conference, whose principal panel sessions are being live-streamed. Keynote speakers and panellists include:
  • Bob Picciotto (King’s College, UKES, EES), Elliot Stern, Editor of ‘Evaluation’, Bruno Marchal (Institute of Tropical Medicine, Antwerp), John Grove (Gates Foundation), Ben Ramalingam (ODI), Aaron Zazueta (GEF), Peter Loewe (UNIDO), Martin Reynolds (Open University), Bob Williams, Richard Hummelbrunner (OAR), Patricia Rogers (Royal Melbourne Institute of Technology), Barbara Befani (IDS, EES), Laura Camfield and Richard Palmer-Jones (University of East Anglia), Chris Barnett (ITAD/IDS), Giel Ton (University of Wageningen), John Mayne, Jos Vaessen (UNESCO), Oscar Garcia (UNDP), Lina Payne (DFID), Marie Gaarder (World Bank), Colin Kirk (UNICEF), Ole Winckler Andersen (DANIDA)

Impact, Innovation and Learning – live-streamed event, 26-27 March 2013

Current approaches to the evaluation of development impact represent only a fraction of the research methods used in political science, sociology, psychology and other social sciences. For example, systems thinking and complexity science, causal inference models not limited to counterfactual analysis, and mixed approaches with blurred ‘quali-quanti’ boundaries, have all shown potential for application in development settings. Alongside this, evaluation research could be more explicit about its values and its learning potential for a wider range of stakeholders. Consequently, a key challenge in evaluating development impact is mastering a broad range of approaches, models and methods that produce evidence of performance in a variety of interventions in a range of different settings.
The aim of this event, which will see the launch of the new Centre for Development Impact (www.ids.ac.uk/cdi), is to shape a future agenda for research and practice in the evaluation of development impact. While this is an invitation-only event, we will be live-streaming the main presentations from the plenary sessions and panel discussions. If you would like to register to watch any of these sessions online, please contact Tamlyn Munslow in the first instance at t.munslow@ids.ac.uk.
More information at:
http://www.ids.ac.uk/events/impact-innovation-and-learning-towards-a-research-and-practice-agenda-for-the-future
If you are unable to watch the live-streamed events, there will be a Watch Again option after the conference.
With best wishes,
Emilie Wilson
Communications Officer
Institute of Development Studies

Rick Davies comment 28 March 2013: Videos of 9 presentations and panels are now available online at http://www.ustream.tv/recorded/30426381

Do we need more attention to monitoring relative to evaluation?

This post title was prompted by my reading of Daniel Ticehurst’s paper (below), and by some of my reading of the literature on complexity theory and data mining.

First, Daniel’s paper: Who is listening to whom, and how well and with what effect? Daniel Ticehurst, October 16th, 2012. 34 pages

Abstract:

“I am a so called Monitoring and Evaluation (M&E) specialist although, as this paper hopefully reveals, my passion is monitoring. Hence I dislike the collective term ‘M&E’. I see them as very different things. I also dislike the setting up of Monitoring and especially Evaluation units on development aid programmes: the skills and processes necessary for good monitoring should be an integral part of management; and evaluation should be seen as a different function. I often find that ‘M&E’ experts, driven by donor insistence on their presence backed up by so-called evaluation departments with, interestingly, no equivalent structure, function or capacity for monitoring, over-complicate the already challenging task of managing development programmes. The work of a monitoring specialist, to avoid contradicting myself, is to help instil an understanding of the scope of what a good monitoring process looks like. Based on this, it is to support those responsible for managing programmes to work together in following this process through so as to drive better, not just comment on, performance.”

“I have spent most of my 20 years in development aid working on long term assignments mainly in various countries in Africa and exclusively on ‘M&E’ across the agriculture and private sector development sectors hoping to become a decent consultant. Of course, just because I have done nothing else but ‘M&E.’ does not mean I excel at both. However, it has meant that I have had opportunities to make mistakes and learn from them and the work of others. I make reference to the work of others throughout this paper from which I have learnt and continue to learn a great deal.”

“The purpose of this paper is to stimulate debate on what makes for good monitoring. It draws on my reading of history and perceptions of current practice, in the development aid and a bit in the corporate sectors. I dwell on the history deliberately as it throws up some good practice, thus relevant lessons and, with these in mind, pass some comment on current practice and thinking. This is particularly instructive regarding the resurgence of the aid industry’s focus on results and recent claims about how there is scant experience in involving intended beneficiaries and establishing feedback loops, in the agricultural sector anyway. The main audience I have in mind are not those associated with managing or carrying out evaluations. Rather, this paper seeks to highlight particular actions I hope will be useful to managers responsible for monitoring (be they directors in Ministries, managers in consulting companies, NGOs or civil servants in donor agencies who oversee programme implementation) and will improve a neglected area.”

Rick Davies comment: Complexity theory writers seem to give considerable emphasis to the idea of constant change and the substantial unpredictability of complex adaptive systems (e.g. most human societies). Yet surprisingly enough, we find more writings on complexity and evaluation than we do on complexity and monitoring. For a very crude bit of evidence, compare Google searches for “monitoring and complexity -evaluation” and “evaluation and complexity -monitoring”. There are literally twice as many search results for the second search string. This imbalance is strange, because monitoring typically happens more frequently and looks at smaller units of time than evaluation. You would think its use would be more suited to complex projects and settings. Is this because we have not had in the past the necessary analytic tools to make best use of monitoring data? Is it also because the audiences for any use of the data have been quite small, limited perhaps to the implementing agency, their donor(s) and, at best, the intended beneficiaries? The latter should no longer be the case, given the global movement for greater transparency in the operations of aid programs, aided by continually widening internet access. In addition to the wide range of statistical tools suitable for hypothesis testing (generally under-utilised, even in their simplest forms, e.g. chi-square tests), there are now a range of data mining tools that are useful for more inductive pattern-finding purposes. (Dare I say it, but…) these are already in widespread use by big businesses to understand and predict their customers’ behaviours (e.g. their purchasing decisions). The analytic tools are there, and available in free open-source forms (e.g. RapidMiner).
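
As a minimal sketch of how little is needed to get started (my own illustration, with invented counts), even the humble chi-square test can turn routine monitoring data into a test of whether performance differs across parts of a portfolio:

```python
# A minimal, hypothetical sketch: a chi-square test on monitoring data.
# The counts are invented: two delivery approaches, cross-tabulated by
# whether participants achieved the desired outcome.
from scipy.stats import chi2_contingency

observed = [
    [45, 55],  # approach A: outcome achieved / not achieved
    [70, 30],  # approach B: outcome achieved / not achieved
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")

# A small p-value suggests the outcome rate genuinely differs between the
# two approaches; a pattern worth acting on long before a final evaluation
```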

Dealing with complexity through “actor-focused” Planning, Monitoring & Evaluation (PME)

From results-based management towards results-based learning
Jan Van Ongevalle (HIVA), Huib Huyse (HIVA), Cristien Temmink (PSO), Eugenia Boutylkova (PSO), Anneke Maarse (Double Loop)
November 2012. Available as pdf

This document is the final output of the PSO Thematic Learning Programme (TLP) on Planning, Monitoring and Evaluation (PME) of Complex Processes of Social Change, facilitated and funded by PSO, Netherlands and supported by HIVA (Belgium).

1. Introduction

This paper reports the results of a collaborative action-research process (2010-2012) in which 10 development organisations (nine Dutch and one Belgian), together with their Southern partners, explored if and how a variety of Planning, Monitoring and Evaluation (PME) approaches and methods helped them deal with processes of complex change. These approaches include Outcome Mapping (OM), Most Significant Change (MSC), Sensemaker, client-satisfaction instruments, personal-goal exercises, outcome studies, and scorecards.

The study has been supported by PSO, an association of Dutch development organisations that supports capacity-development processes. The Research Institute for Work and Society (HIVA) at the University of Leuven (KU Leuven) provided methodological support.

The collaborative action research took place on two interconnected levels. At the first level, individual organisations engaged in their own action-research processes in order to address their organisation-specific PME challenges. At a collective level, we wanted to draw lessons from across the individual cases. The overall aim was to find out if and how the various PME approaches piloted in the cases had helped the organisations and their partners to deal with complex change processes. We tried to answer this question by exploring how the PME approaches assisted the pilot cases to deal with the following four implications of PME in complexity: 1) dealing with multiple relations and perspectives; 2) learning about the results of the programme; 3) strengthening adaptive capacity; and 4) satisfying different accountability needs. These four questions constitute the main analytic framework of the action research.

A PME approach in this paper refers to the PME methods, tools and concepts and the way they are implemented within a specific context of a programme or organisation. A PME approach also encompasses the underlying values, principles and agenda that come with its methods, tools and concepts. A PME system refers to the way that PME approaches and PME-related activities are practically organised, interlinked and implemented within a specific context of a programme or organisation.

Part of the uniqueness of this paper stems from the fact that it is based on the “real life” experiences of the ten pilot cases, where the participants took charge of their own individual action-research processes with the aim of strengthening their PME practice. The results presented in this article are based on an analysis across the 10 cases. It is the result of close collaboration with representatives of the different cases through various rounds of revision. A group of external advisors also gave input into the cross-case analysis. Extracts from the different cases are given throughout the results chapter to illustrate the arguments made. More detailed information about each case can be found in the individual case reports, which are available at: https://partos.nl/content/planning-monitoring-and-evaluation-complex-processes-social-change

Pan Africa-Asia Results-Based M&E Forum, Bangkok, Nov 2012 – Presentations now available

The 2012 Pan Africa-Asia Results-Based M&E Forum

Bangkok, November 26-28

Sixteen presentations over three days, listed and available online here.

Monday 26 November, 2012

Dr John Mayne, Independent Public Sector Performance Adviser. “Making Causal Claims” (9.15 – 10.15 am)

Jennifer Mullowney, Senior RBM&E Specialist, CIDA. “How to Apply Results-Based Management in Fragile States and Situations: Challenges, Constraints, and Way Forward” (10.15 – 10.45 am)

Shakeel Mahmood, Coordinator Strategic Planning & M&E, ICDDR. “Strategies for Capacity Building for Health Research in Bangladesh: Role of Core Funding and a Common Monitoring and Evaluation Framework” (11.30 – 12 noon)

Troy Stremler, CEO, Newdea Inc. “Social Sector Trends” (1.40 – 2.40 pm)

Dr Carroll Patterson, Co-founder, SoCha. “From M&E to Social Change: Implementation Imperatives” (2.40 – 3.10 pm)

Susan Davis, Executive Director, Improve International, and Marla Smith-Nilson, Executive Director, Water 1st International. “A Novel Way to Promote Accountability in WASH: Results from First Water & Sanitation Accountability Forum & Plans for the Future” (3.55 – 4.25 pm)

 

 Tuesday 27 November, 2012

Sanjay Saxena, Director, TSCPL, M&E/MIS System Consultant. “Challenges in Implementing M&E Systems for Reform Programs” (9.15 – 10.15 am)

Chung Lai, Senior M&E Officer, International Relief and Development. “Using Data Flow Diagrams in Data Management Processes (demonstration)” (10.15 – 10.45 am)

Korapin Tohtubtiang, International Livestock Research Institute, Thailand. “Lessons Learned from Outcome Mapping in an IDRC Eco-Health Project” (11.30 – 12 noon)

Dr Paul Duignan of DoView, International Outcomes and Evaluation Specialist. “Anyone Else Think the Way We Do Our M&E Work is Too Cumbersome and Painful? Using DoView Visual Strategic Planning & Success Tracking M&E Software – Simplifying, Streamlining and Speeding up Planning, Monitoring and Evaluation” (1.40 – 2.40 pm)

Ahmed Ali, M&E Specialist, FATA Secretariat, Multi-Donor Trust Fund & the World Bank. “The Sub-national M&E Systems of the Government of Khyber Pakhtunkhwa and FATA – the Case Study of M&E Multi-donor Trust Fund Projects” (2.40 – 3.10 pm)

Global Health Access Program (GHAP) Backpack Health Worker Teams, Thailand. “Cross-border M&E of Health Programs Targeting Internally Displaced Persons (IDPs) in Conflict-affected Regions of Eastern Burma” (3.55 – 4.25 pm)

 

Wednesday 28 November, 2012

Dr V. Rengarajan (Independent M&E & Micro-financing Consultant, Author & Researcher). “What is Needed is an Integrated Approach to M&E” (9.15 – 10.15 am)

Dr Lesley Williams, Independent M&E & Capacity-building Consultant, Learn MandE. “Value for Money (VfM): an Introduction” (10.15 – 10.45 am)

Eugenia Boutylkova (Program Officer, PSO, Holland) and Jan Van Ongevalle (Research Manager, HIVA/KU Leuven, Belgium). “Thematic Learning Program (TLP): Dealing with Complexity through Planning, Monitoring & Evaluation (PME)” (11.30 – 12 noon)

Catharina Maria. “Does the Absence of Conflict Indicate a Successful Peace-building Project?” (1.40 – 2.40 pm)

 

AEA Conference: Evaluation in Complex Ecologies

Relationships, Responsibilities, Relevance
26th Annual Conference of the American Evaluation Association
Minneapolis, Minnesota, USA
Conference: October 24-27, 2012
Workshops: October 22, 23, 24, 28

“Evaluation takes place in complex global and local ecologies where we evaluators play important roles in building better organizations and communities and in creating opportunities for a better world. Ecology is about how systems work, engage, intersect, transform, and interrelate. Complex ecologies are comprised of relationships, responsibilities, and relevance within our study of programs, policies, projects, and other areas in which we carry out evaluations.

Relationships. Concern for relationships obliges evaluators to consider questions such as: what key interactions, variables, or stakeholders do we need to attend to (or not) in an evaluation? Evaluations do not exist in a vacuum disconnected from issues, tensions, and historic and contextualized realities, systems, and power dynamics. Evaluators who are aware of the complex ecologies in which we work attend to relationships to identify new questions and to pursue new answers. Other questions we may pursue include:

  • Whose interests and what decisions and relationships are driving the evaluation context?
  • How can evaluators attend to important interactions amidst competing interests and values through innovative methodologies, procedures, and processes?

Responsibilities. Attention to responsibilities requires evaluators to consider questions such as: what responsibilities, inclusive of and beyond the technical, do we evaluators have in carrying out our evaluations? Evaluators do not ignore the diversity of general and public interests and values in evaluation. Evaluations in complex ecologies make us aware of ethical and professional obligations and understandings between parties who seek to frame questions and insights that challenge them. Other questions we may pursue include:

  • How can evaluators ensure their work is responsive, responsible, ethical, equitable, and/or transparent for stakeholders and key users of evaluations?
  • In what ways might evaluation design, implementation, and utilization be responsible to issues pertinent to our general and social welfare?

Relevance. A focus on relevance leads to evaluations that consider questions such as: what relevance do our evaluations have in complex social, environmental, fiscal, institutional, and/or programmatic ecologies? Evaluators do not have the luxury of ignoring use, meaning, or sustainability; instead all evaluations require continual review of purposes, evaluands, outcomes, and other matters relevant to products, projects, programs, and policies. Other questions we may pursue include:

  • How can evaluators ensure that their decisions, findings, and insights are meaningful to diverse communities, contexts, and cultures?
  • What strategies exist for evaluators, especially considering our transdisciplinary backgrounds, to convey relevant evaluation processes, practices, and procedures?

Consider this an invitation to submit a proposal for Evaluation 2012 and join us in Minneapolis as we consider evaluation in complex ecologies where relationships, responsibilities, and/or relevance are key issues to address.”

 

Dealing with complexity through Planning, Monitoring & Evaluation

Mid-term results of a collective action research process.
Authors: Jan Van Ongevalle, Anneke Maarse, Cristien Temmink, Eugenia Boutylkova and Huib Huyse. Published January 2012
Praxis Paper 26, available as pdf

(Text from INTRAC website) “Written by staff from PSO and HIVA, this paper shares the first results of an ongoing collaborative action research in which ten development organisations explored different Planning, Monitoring and Evaluation (PME) approaches with the aim of dealing more effectively with complex processes of social change.

This paper may be of interest as:
1) It illustrates a practical example of action research whereby the organisations themselves are becoming the researchers.
2) Unpacking the main characteristics of complexity, the paper uses an analytic framework of four questions to assess the effectiveness of a PME approach in dealing with complex social change.
3) An overview is given of how various organisations implemented different PME approaches (e.g. outcome mapping, most significant change, client satisfaction instruments) in order to deal with complex change.
4) The paper outlines the meaning and the importance of a balanced PME approach, including its agenda, its underlying principles and values, its methods and tools and the way it is implemented in a particular context.”

Conference: Evaluation in a Complex World – Balancing Theory and Practice

April 29 – May 1, 2012 (Sunday-Tuesday)
Seaview Resort, Galloway, NJ, USA. (http://www.dolce-seaview-hotel.com)

Organised by the Eastern Evaluation Research Society, a Regional Affiliate of the American Evaluation Association. Flyer available here

Keynote Speaker: Jennifer Greene, University of Illinois and President of AEA
Featured Speakers: Eleanor Chelimsky, U.S. Government Accountability Office and former AEA President; Rodney Hopson, Duquesne University and incoming President of AEA

Sunday Afternoon Pre-Conference Workshops and Sessions:
Meta-Analysis – Ning Rui, Research for Better Schools

Focus Group Research: Planning and Implementation – Michelle Revels, ICF International

Career Talk with the Experts (NEW!): an unstructured conversation about your evaluation career. This session is free to participants!

Sunday Evening Interactive & Networking Session: John Kelley, Villanova University

Concurrent Sessions featuring: Skill Building Sessions, Individual Presentations & Panel Sessions

A full conference program will be posted at www.eers.org by mid-February 2012.

Diversity and Complexity

by Scott Page, 2011. Princeton University Press, 14/07/2011, 296 pages. Available on Google Books.

Abstract: This book provides an introduction to the role of diversity in complex adaptive systems. A complex system – such as an economy or a tropical ecosystem – consists of interacting adaptive entities that produce dynamic patterns and structures. Diversity plays a different role in a complex system than it does in an equilibrium system, where it often merely produces variation around the mean for performance measures. In complex adaptive systems, diversity makes fundamental contributions to system performance. Scott Page gives a concise primer on how diversity happens, how it is maintained, and how it affects complex systems. He explains how diversity underpins system-level robustness, allowing for multiple responses to external shocks and internal adaptations; how it provides the seeds for large events by creating outliers that fuel tipping points; and how it drives novelty and innovation. Page looks at the different kinds of diversity – variations within and across types, and distinct community compositions and interaction structures – and covers the evolution of diversity within complex systems and the factors that determine the amount of maintained diversity within a system. Provides a concise and accessible introduction. Shows how diversity underpins robustness and fuels tipping points. Covers all types of diversity. The essential primer on diversity in complex adaptive systems.

RD Comment: This book is very useful for thinking about the measurement of diversity. In 2000 I wrote a paper “Does Empowerment Start At Home? And If So, How Will We Recognise It?” in which I argued that…

“At the population level, diversity of behaviour can be seen as a gross indicator of agency (of the ability to make choices), relative to homogenous behaviour by the same set of people. Diversity of behaviour suggests there is a range of possibilities which individuals can pursue. At the other extreme is standardisation of behaviour, which we often associate with limited choice. The most notable example being perhaps that of an army. An army is a highly organised structure where individuality is not encouraged, and where standardised and predictable behaviour is very important. Like the term “NGO” or “non-profit”, diversity is defined by something that it is not – a condition where there is no common constraint, which would otherwise lead to a homogeneity of response. Homogeneity of behaviour may arise from various sources of constraint. A flood may force all farmers in a large area to move their animals to the high ground. Everybody’s responses are the same, when compared to what they would be doing on a normal day. At a certain time of the year all farmers may be planting the same crop. Here homogeneity of practice may reflect common constraints arising from a combination of sources: the nature of the physical environment, and the nature of particular local economies. Constraints on diversity can also arise within the assisting organisation. Credit programs can impose rules on loan use, specific repayment schedules and loan terms, as well as limiting when access to credit is available, or how quickly approval will be given.”
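
As a purely illustrative sketch of how such diversity of behaviour might be measured (not taken from the book or from my 2000 paper; the behaviour categories and counts are invented), Shannon entropy over the distribution of observed behaviours provides a simple population-level index:

```python
# A minimal, hypothetical sketch: measuring behavioural diversity with
# Shannon entropy. Behaviour categories and counts are invented.
from collections import Counter
from math import log

def shannon_entropy(observations):
    """Shannon entropy (in bits) of the distribution of observed behaviours."""
    counts = Counter(observations)
    total = len(observations)
    return -sum((n / total) * log(n / total, 2) for n in counts.values())

# Homogeneous behaviour (e.g. all farmers planting the same crop)
uniform = ["rice"] * 10
# Diverse behaviour across the same number of farmers
diverse = ["rice", "maize", "cassava", "rice", "beans",
           "maize", "vegetables", "rice", "cassava", "beans"]

print(shannon_entropy(uniform))  # 0.0 bits: no diversity, common constraint
print(shannon_entropy(diverse))  # > 2 bits: many choices being exercised
```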

See also…
