VisuaLyzer software: for visualising and analysing networks

There are now many different software packages that can visually represent networks and generate a wide range of statistical measures of their structure. Unfortunately, many of these have a steep learning curve and offer far more bells and whistles than I need. VisuaLyzer is my favourite package because it is user friendly and easy to learn.

VisuaLyzer is produced by mdlogix, USA. You can download a trial version or buy a copy from this part of their website. For more information contact Allen Tien <allen@mdlogix.com> at mdlogix. If you do contact him, please mention you heard about VisuaLyzer on Rick Davies’s website, MandE NEWS.

My main use of VisuaLyzer is to draw the organisational networks I am working with, in the course of my work as an M&E consultant on development aid programmes. These are of two types: (a) literal descriptions (maps) of the relationships as known; (b) simplified models of complex networks showing the main types of organisations and the relationships between them. Less frequently, I also import data from Excel to automatically generate network maps. This data usually comes from project documents or online surveys. I also use the combination of UCINET and NetDraw for this task.

Here is an example of a network that I drew by hand directly on screen. It represents the relationships between AMREF’s partners in the Katine project, Uganda. Click on the image to expand it in a new window, then click again to get a focused image. You can represent different types of actors by varying the colour, size and shape of nodes, and the different kinds of relationships between them by varying the kind of line used, its colour and thickness. If you click on a node you can enter detailed text or numerical data describing the actor’s attributes, using as many fields as needed. If you click on any link you can enter data about the attributes of that relationship. Both of these sets of data, on all actors and relationships, can be exported as an Excel file. You can also import the same kind of data to automatically generate a network diagram.
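The data behind this kind of import/export is essentially an edgelist: one row per relationship, with the two actors and the relationship’s attributes as columns. A minimal sketch in Python of how such data can be read into a network structure (the column names and example actors here are purely illustrative, not VisuaLyzer’s actual import layout):

```python
import csv
import io

# A tiny edgelist of the kind exported from Excel: one row per relationship.
# Columns and actor names are hypothetical examples, not real project data.
edgelist_csv = """source,target,relationship
AMREF,Katine Health Centre,funding
AMREF,District Government,coordination
District Government,Katine Health Centre,supervision
"""

# Build a simple adjacency structure: each actor maps to a list of
# (other actor, relationship type) pairs.
network = {}
for row in csv.DictReader(io.StringIO(edgelist_csv)):
    network.setdefault(row["source"], []).append(
        (row["target"], row["relationship"]))

print(network["AMREF"])
# [('Katine Health Centre', 'funding'), ('District Government', 'coordination')]
```

The same structure can then be handed to a drawing tool; tools like UCINET/NetDraw and VisuaLyzer accept comparable tabular formats.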

mdlogix describe it as “an interactive tool for entering, visualizing and analyzing network data. You can create nodes and links directly or import network data from edgelist/edgearray, Excel, or GraphML formats. Once the network is displayed, you can customize visual properties such as the colour, shape, size, and location of nodes and links to create an informative graphic representation. Images of your choice may be used to represent nodes. XY mapping of nodes as a function of node attributes is supported in layered layout. It also provides a number of analysis functions for calculating network and nodal level indices, and for finding sub-groups, partitions, communities, and roles and positions. In addition, VisuaLyzer includes powerful logic programming capabilities that allow you to investigate networks using axioms of classical set theory.”

This all sounds quite complex. But in practice it is the simplest features of VisuaLyzer that are the most useful. It also has a very good and easy-to-read Users Guide (5MB), which you may want to look at.

For more on the development of network models / descriptions and their use in monitoring and evaluation go to the Network Models section of this website.

POSTSCRIPT (1st December 2008): See also Overview of Common Social Network Analysis Software Platforms: “This report was developed by the Philanthropy and Networks Exploration, a partnership between the Packard Foundation and Monitor Institute. The exploration is an inquiry into how networks can facilitate greater philanthropic effectiveness. For more information, please go to http://www.philanthropyandnetworks.org”

PS2 (16th January 2009): The link to the “Overview …” doc no longer works. I have now uploaded the doc HERE, after receiving a copy via the Packard Foundation. They also sent a link to: “Working Wikily: How networks are changing social change”.

Networks and evaluation

This page is about two complementary perspectives: the evaluation of networks, and how a network perspective can inform the design and evaluation of development programs (which may not have been designed as networks).

Please note that the contents of this page have been cut and pasted from the old MandE NEWS website. All links go to content on the old site. The old content will be moved to the new website as soon as possible.

Predicting the achievements of the Katine project

September 2010: This post provides information on a revised proposal for a “Predictions Survey” on the achievements of the Katine Community Partnerships Project, a project managed by AMREF and funded by the Guardian and Barclays Bank, between 2007 and 2011.

Background Assumptions

The Guardian coverage of the Katine project has provided an unparalleled level of public transparency to the workings of an aid project. As of August 2010, approximately 530 articles have been posted on the site, most of which are specifically about Katine. These posts have included copies of project documentation (plans, budgets, progress reports, review reports) that often does not enter the public realm.

Ideally this level of transparency would have two benefits: (a) improving UK public knowledge about the challenges of providing effective aid; (b) imposing some constructive discipline on the work of the NGO concerned, because they know they are under continuing scrutiny not only locally but internationally. Whether this has actually been the case is yet to be systematically assessed. However, I understand the effects on the project and its local stakeholders (i.e. (b) above) will be subject to review by Ben Jones later this year, and then open to discussion in a one-day event in November, to be organised by the Guardian.

So far there have been two kinds of opportunities for the British and other publics to be engaged in the public monitoring of the Katine project. One has been through posting comments on the articles on the Guardian website. About 30% of all articles have provided this opportunity, and these articles have attracted an average of 5 comments. The other option has been by invitation from the Guardian, to make a guest posting on the website. This invitation has been extended to specialists in the UK and elsewhere. Multiple efforts have also been made to hear different voices from within the Katine community itself.

The Predictions Survey would provide another kind of opportunity for participation. It would allow a wide range of participants to:

  • make some judgments about the overall achievements of the project
  • explain those judgments
  • see how those judgments compare with those of others
  • see how those judgments compare with the facts about what has actually been achieved by the end of the project

In addition, a Predictions Survey would provide a means of testing the expectation that greater transparency can improve public knowledge about the challenges of providing effective aid.

My proposal is that the Predictions Survey would consist of five batches of questions, one for each project component, each on a separate page. Each question would be multiple choice, with an optional Comment field. People could respond on the basis of their existing knowledge of the project (which could vary widely) and/or extra information from the website, obtained via component-specific links embedded at the head of each page of the online survey, e.g. on water and sanitation. Questions at the end of the survey would identify participants’ sources of knowledge about the project (e.g. obtained before and during the survey, from the website and elsewhere).

A first rough draft survey form is already available to view. Any responses entered at this stage may be noted, but they will then be deleted and not included in any final analysis. The final design of the survey will require close consultation with AMREF and the Guardian.

Intended participants in the survey

  • UK public, reached via the Guardian
  • Uganda public, reached via Ugandan newspapers (likely to be more of a challenge)
  • AMREF staff, especially in Uganda, Kenya HQ and UK
  • The Guardian and Barclays, as donors
  • Monitoring and Evaluation specialists, reached via an international email list

Hypotheses (predictions about the predictions)

  1. We might expect that AMREF would be able to make the most accurate predictions, given its central role. But aid agencies are often tempted to put a gloss on their achievements, because of the gap that sometimes emerges between their ambitions and what can actually be done in practice.
  2. We might expect that participants who have been following the Guardian coverage closely since the beginning might be better informed and make better predictions than others who have become interested more recently. But perhaps those participants are still responding on the basis of their original beliefs (aka biases)?
  3. We might expect M&E specialists to make better than average predictions because of their experience in analysing project performance. But perhaps they have become too skeptical about everything they read.
  4. We might expect the Guardian and Barclays staff to make better than average predictions because they have been following the project closely since inception and their organisation’s money is  invested in it. But perhaps they only want to see success.
  5. We might expect the highest frequency choices (across all groups) to be more accurate than the choices of any of the above groups, because of a “wisdom of crowds” effect. The potential of crowdsourcing was of interest to the Guardian at the beginning of the project, and this survey could be seen as a form of crowdsourcing – of judgements.

This list is not final. Other hypotheses could be identified in the process of consultation over the design of the survey.
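The “wisdom of crowds” aggregation in hypothesis 5 can be made concrete: the crowd’s prediction for each question is simply the most frequent choice across all participants, which can then be compared with each group’s choices and with the final facts. A minimal sketch (the questions and responses below are invented, purely to show the mechanics):

```python
from collections import Counter

# Hypothetical survey responses: for each question, a list of the multiple
# choice options selected by individual participants.
responses = {
    "water_and_sanitation": ["target met", "target met", "partly met", "not met"],
    "education": ["partly met", "partly met", "target met"],
}

# The crowd prediction per question is the modal (most frequent) choice.
crowd_prediction = {
    question: Counter(choices).most_common(1)[0][0]
    for question, choices in responses.items()
}

print(crowd_prediction)
# {'water_and_sanitation': 'target met', 'education': 'partly met'}
```

Comparing this modal choice against each group’s own modal choice, and against the eventual outcome data, is one simple way the hypothesis could be tested.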

There may also be other, less testable predictions worth identifying: for example, about the effects of this Predictions Survey on the work done by AMREF and its partners in the final year up to October 2011. Might it lead to a focus on what is being measured by the survey, to the detriment of other important aspects of their work? If AMREF has a comprehensive monitoring framework and the predictions survey addresses the same breadth of performance (and not just one or two performance indicators), this should not be a problem.

Timeframe

The fourth and final year of the project starts in October 2010 and ends in October 2011.

The finalisation of the design of the Predictions Survey will require extensive consultation with AMREF and the Guardian, in order to ensure the fullest possible ownership of the process, and thus of the results that are generated. Ideally this process would be completed by late October 2010.

The survey could be open from late October to the end of March 2011 (six months before the end of the project). All responses would be date stamped to take account of any advantage of being a later participant.

A process will need to be agreed in 2010 on how objective information can be obtained on which of the multiple choice options have eventuated by October 2011.

A post 2011 follow up survey may be worth considering. This would focus on predictions of what will happen in the post-project period, up to 2014, the year of the vision statement produced by participants in the September 2009 stakeholders workshop in Katine.

“In 2014, Katine will be an active, empowered community taking responsibility for their development with decent health, education, food security and able to sustain it with the local government”

Supporters

The participation of the Guardian and AMREF will be very important, although it is conceivable that the survey could be run independently of their cooperation.

Assistance with publicity, to find participants, would be needed from the Guardian and Barclays.

Advisory support is being sought from the One World Trust.

Advisory support from other organisations could also be useful.

The online survey could be designed and managed by Rick Davies. However, responsibility could be given to another party agreed to by AMREF, the Guardian and Barclays.

Challenges

  • The survey design needs to be short enough to encourage people to complete it, but not so short that important aspects of the project’s performance are left out
  • The description of the objectives used in the survey needs to be as clear and specific as possible, but also keep as close to AMREF’s original words as possible (i.e. as in the 4th year extension proposal, and using the M&E framework, now being updated)
  • Participants will be asked to make a single choice between multiple options, describing what might happen. These options will need to be carefully chosen, so there are no obvious “no brainers”, and to cover a range of plausible possibilities
  • It may be necessary in some cases (e.g. with some broadly defined objectives) to allow multiple choices from multiple options
  • I have heard that AMREF will be conducting a final evaluation in late 2011, using an external consultant. This evaluation could be the source of the final set of data on actual performance, against which participants’ predictions could be compared. But will it be seen as a sufficiently independent source of information?

A digression on complexity and networks…

….a side argument from the Rick on the Road post: Cynefin Framework versus Stacey Matrix versus network perspectives

In that post I said

PS1: Michael Quinn Patton’s book on Developmental Evaluation has a whole chapter on “Distinguishing Simple, Complicated, and Complex”. However, I was surprised to find that despite the book’s focus on complexity, there was not a single reference in the Index to “networks”. There was one example of a network model (Exhibit 5.3), contrasted with a Linear Program Logic Model (Exhibit 5.2), in the chapter on Systems Thinking and Complexity Concepts. [I will elaborate further]

One interpretation: complexity arises through the interaction of many agents having some degree of autonomy. With no autonomy there is simple order (complete predictability); with complete autonomy there is chaos (no predictability). How do we define autonomy? One view: autonomy = the number of possible relationships an actor can have with others. When realised, this can be measured in terms of network density (a Social Network Analysis (SNA) measure). Two caricature examples of the extremes: 1. An army, with a hierarchical chain of command, is highly ordered. Here the network structure is sparse (i.e. a tree structure) and low in density. 2. “Economic man”, who is free to interact with anyone in order to maximise his/her utility. Here all possible relationships can be realised, as everyone interacts with everyone. Complexity is the territory in between, where actors have some degree of choice over who they interact with, and where there is some degree of predictability. When realised, those choices can also be described in terms of different kinds of network structures. So if we want to explore complex systems we need to look at the structure of networks of actors, both as “initial conditions” affecting what happens next and as “final states” reflecting what has happened over a given period of time. I.e. an empirical approach, not mysticism :-)
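The contrast between those two caricature extremes can be made concrete with the standard SNA density measure: realised ties divided by possible ties. A short sketch (the actor count is purely illustrative):

```python
# Density of an undirected network = realised ties / possible ties,
# where possible ties for n actors = n * (n - 1) / 2.

def density(n_actors, n_ties):
    possible = n_actors * (n_actors - 1) / 2
    return n_ties / possible

n = 10
tree_ties = n - 1                 # strict hierarchy: each actor tied to one superior
complete_ties = n * (n - 1) // 2  # "economic man": everyone interacts with everyone

print(density(n, tree_ties))      # 0.2 -> sparse, ordered
print(density(n, complete_ties))  # 1.0 -> fully connected
```

Real organisational networks sit somewhere between these two densities, which is exactly the territory described above as complex.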

PS: The concept of autonomy could probably be further differentiated, in terms of relationship choices, as follows: (a) the range of relationships available to an actor, already discussed above; (b) the freedom to choose amongst those that are available; (c) the range of behaviours available within a given relationship. But how do you measure freedom (b)? One measure might be the degree to which any choices made are uncorrelated with other events. The diversity of choices made could also be important. Diversity suggests freedom from constraint (more on this theme here).
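One hedged way of quantifying that diversity of choices is the Shannon entropy of the distribution of an actor’s relationship choices: zero when the same choice is made every time (suggesting constraint), maximal when choices are spread evenly. The choice labels below are hypothetical:

```python
import math
from collections import Counter

# Shannon entropy (in bits) of an actor's observed relationship choices.
def choice_entropy(choices):
    counts = Counter(choices)
    total = len(choices)
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

constrained = ["partner_A"] * 8  # the same partner chosen every time
free = ["partner_A", "partner_B", "partner_C", "partner_D"] * 2  # even spread

print(choice_entropy(constrained))  # 0.0
print(choice_entropy(free))         # 2.0 (= log2 of 4 equally used options)
```

This is only one possible operationalisation; it captures diversity, not the correlation-with-events aspect of freedom mentioned in (b).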

Making government budgets more accessible and equitable

(from ID21)

Involvement in the budget process in poor countries has traditionally been limited to a select group of political actors. But this has changed over the last decade with legislators, civil society groups and the media playing a more active role. What impact is broader engagement having?

Research from the Institute of Development Studies, UK, examines the substance and impact of applied budget work undertaken by civil society groups. The research draws on six case studies of independent budget work in Brazil, Croatia, India, Mexico, South Africa and Uganda. One focus of the research is how civil society budget work influences government budget priorities and spending in a way that benefits poor and socially excluded groups.

Budget work is carried out by various types of organisations including non-government organisations (NGOs), networks and social movements, and research organisations. All the groups examined in the case studies share a commitment to increasing the influence of poor and marginalised groups in the budget process and ensuring that budget priorities reflect the needs of these groups.

The six organisations all engage in certain core activities centred on data analysis and dissemination, advocacy and capacity building. Most work on national and state-level budgets, though several groups also work at the local government level.

The research shows that independent budget work has the potential to deepen democracy by strengthening accountability, fostering transparency and encouraging participation. It can also increase financial allocations in areas that contribute to social justice and equity outcomes and ensure that public money is efficiently spent.

The research also reveals the limits to budget work. Any increases in financial allocations secured as a result of advocacy initiatives are likely to represent a small share of overall government spending. Also, the scope of budget work to influence financial allocations depends on the openness and flexibility of the budget process (spending priorities may not be open to change).

The impacts of budget work identified by the research include:

  • improving the transparency of budget decisions and budget processes and increasing the accountability of state actors
  • increasing awareness and understanding of budget issues
  • improving budget allocations in a way that benefits poor and socially excluded groups
  • ensuring better use of spending, for example in areas such as health and education, and reducing corruption (by tracking expenditures)
  • diversifying the range of actors engaged in budget processes (for example, legislators, civil society groups and the media)
  • strengthening democracy and deepening participation.

The research concludes that:

  • Budget work has been successful in a range of areas, including improving equity and social justice outcomes.
  • The technical nature of the budget process limits the scope for broadening citizen participation.
  • The challenge for budget groups is how to scale up and replicate the successful impacts achieved to date.
  • Influencing budget policies requires a combination of sound technical knowledge, effective communications and strategic alliances.
  • Promoting the voice of poor and socially excluded groups is an important indirect effect of budget work.

Source(s):
‘Budget Analysis and Policy Advocacy: The Role of Non-governmental Public Action’, IDS Working Paper 279, IDS: Brighton, by Mark Robinson, 2006 Full document.

Funded by: UK Economic and Social Research Council

id21 Research Highlight: 16 August 2007

Further Information:
Mark Robinson
Policy and Research Division
UK Department for International Development (DFID)
1 Palace Street
London SW1E 5HE
UK

Tel: +44 (0)20 70230000
Fax: +44 (0)20 70230636
Contact the contributor: mark-robinson@dfid.gov.uk

Rick Davies’ comments posted on other blogs and websites

Results Based Management (RBM): A list of resources


CIDA website: Results-based Management

Results-based Management (RBM) is a comprehensive, life-cycle approach to management that integrates business strategy, people, processes, and measurements to improve decision-making and to drive change.

The approach focuses on getting the right design early in a process, implementing performance measurement, learning and changing, and reporting on performance.

  • RBM Guides
  • RBM Reports
  • Related Performance Sites

  • ADB website: Results Based Management Explained

    Results Based Management (RBM) can mean different things to different people. A simple explanation is that RBM is the way an organization is motivated and applies processes and resources to achieve targeted results.

    Results refer to outcomes that convey benefits to the community (e.g. Education for All (EFA), targets set in both Mongolia and Cambodia). Results also encompass the service outputs that make those outcomes possible (such as trained students and trained teachers). The term ‘results’ can also refer to internal outputs such as services provided by one part of the organization for use by another. The key issue is that results differ from ‘activities’ or ‘functions’. Many people when asked what they produce (services) describe what they do (activities).

    RBM encompasses four dimensions, namely:

    • specified results that are measurable, monitorable and relevant
    • resources that are adequate for achieving the targeted results
    • organizational arrangements that ensure authority and responsibilities are aligned with results and resources
    • processes for planning, monitoring, communicating and resource release that enable the organization to convert resources into the desired results.

    RBM may use some new words or apply specific meanings to some words in general usage. Check the introduction to RBM presentation [PDF | 56 pages].

    RBM references that provide more background


    UNFPA website: Results-Based Management at UNFPA

    There is a broad trend among public sector institutions towards Results-Based Management–RBM. Development agencies, bilateral such as Canada, the Netherlands, UK, and the US as well as multilateral such as UNDP, UNICEF and the World Bank, are adopting RBM with the aim to improve programme and management effectiveness and accountability and achieve results.

    RBM is fundamental to the Fund’s approach and practice in fulfilling its mandate and effectively providing assistance to developing countries. At UNFPA, RBM means:

    • Establishing clear organizational vision, mission and priorities, which are translated into a four-year framework of goals, outputs, indicators, strategies and resources (MYFF);
    • Encouraging an organizational and management culture that promotes innovation, learning, accountability, and transparency;
    • Delegating authority and empowering managers and holding them accountable for results;
    • Focusing on achieving results, through strategic planning, regular monitoring of progress, evaluation of performance, and reporting on performance;
    • Creating supportive mechanisms, policies and procedures, building and improving on what is in place, including the operationalization of the logframe;
    • Sharing information and knowledge, learning lessons, and feeding these back into improving decision-making and performance;
    • Optimizing human resources and building capacity among UNFPA staff and national partners to manage for results;
    • Making the best use of scarce financial resources in an efficient manner to achieve results;
    • Strengthening and diversifying partnerships at all levels towards achieving results;
    • Responding to the realities of country situations and needs, within the organizational mandate.

    OECD report: RESULTS BASED MANAGEMENT IN THE DEVELOPMENT CO-OPERATION AGENCIES: A REVIEW OF EXPERIENCE BACKGROUND REPORT

    In order to respond to the need for an overview of the rapid evolution of RBM, the DAC Working Party on Aid Evaluation initiated a study of performance management systems. The ensuing draft report was presented to the February 2000 meeting of the WP-EV and the document was subsequently revised. It was written by Ms. Annette Binnendijk, consultant to the DAC WP-EV.

    This review constitutes the first phase of the project; a second phase involving key informant interviews in a number of agencies is due for completion by November 2001.

    158 pages, 12 page conclusion


    this list has a long way to go….!

    M&E blogs: A List

    • EvalThoughts, by Amy Germuth, President of EvalWorks, LLC, a woman-owned small evaluation and survey research consulting business in Durham, NC, United States.
    • Evaluation and Benchmarking. “This weblog is an on-line workspace for the whole of Victorian government Benchmarking Community of Practice.”
    • M&E Blog, by…?
    • Aid on the Edge of Chaos, by Ben Ramalingam
    • Design, Monitoring and Evaluation, by LARRY DERSHEM – Tbilisi, Georgia
    • Managing for Impact: About “Strengthening Management for Impact” for MFIs
    • Genuine Evaluation: “Patricia J Rogers and E Jane Davidson blog about real, genuine, authentic, practical evaluation”
    • Practical Evaluation, by Samuel Norgah
    • AID/IT M&E Blog: “…is written by Paul Crawford, and is part of a wider AID/IT website”
    • Blog: Evaluateca: Spanish language evaluation blog maintained by Rafael Monterde Diaz. Information, news, views and critical comments on Evaluation
    • Empowerment Evaluation Blog “This is a place for exchanges and discussions about empowerment evaluation practice, theory, and current debates in the literature.” Run by Dr. David Fetterman.
    • E-valuation: “constructing a good life through the exploration of value and valuing” by Sandra Mathison,Professor, Faculty of Education, University of British Columbia
    • Intelligent Measurement. This blog is created by Richard Gaunt in London and Glenn O’Neil in Geneva and focuses on evaluation and measurement in communications, training, management and other fields.
    • Managing for Impact: Let’s talk about MandE! “Welcome to the dedicated SMIP ERIL blog on M&E for managing for impact! An IFAD funded Regional Programme, SMIP (Strengthening Management for Impact) is working with pro-poor initiatives in eastern & southern Africa to build capacities to better manage towards impact. It does so through training courses for individuals, technical support to projects & programmes, generating knowledge, providing opportunities for on-the-job training, and policy dialogue.”
    • MCA Monitor Blog “…is a part of CGD’s MCA Monitor Initiative, which tracks the effectiveness of the US Millennium Challenge Account. Sheila Herrling, Steve Radelet and Amy Crone, key members of CGD’s MCA Monitor team, contribute regularly to the blog. We encourage you to join the discussion by commenting on any post”
    • OutcomesBlog.Org “Dr Paul Duignan on real world strategy, outcomes, evaluation & monitoring.” Dr Paul Duignan is a specialist in outcomes, performance management, strategic decision making, evaluation and assessing research and evidence as the basis for decision making. He has developed the area of outcomes theory and its application in Systematic Outcomes Analysis, the outcomes software DoView, and the simplified approach to his work, Easy Outcomes. He works at an individual, organizational and societal level to develop ways of identifying and measuring outcomes which facilitate effective action. For a bio see here.
    • Rick on the Road: “Reflections on the monitoring and evaluation of development aid projects, programmes and policies, and development of organisations’ capacity to do the same. This blog also functions as the Editorial section of the MandE NEWS website.”
    • The Usable Blog “A blog on “Thoughts, ideas and resources for non-profit organizations and funders about the independent sector in general and program evaluation in particular” By Eric Graig “
    • The MSC Translations blog is maintained by Rick Davies, and is part of the MandE NEWS website. The purpose of this blog is: 1. To make available translations of the MSC Guide in languages other than English. 2. To solicit and share comments on the quality of these translations, so they can be improved. The original English version can be found here: The ‘Most Significant Change’ (MSC) Technique: A Guide to Its Use.
    • Zen and the art of monitoring & evaluation “This blog is some of the rambling thoughts of Paul Crawford, a monitoring & evaluation (M&E) consultant for international aid organisations” Paul is based in Australia.

    And other lists of M&E blogs

    Monitoring government policies: A toolkit for civil society organisations in Africa

    (identified via Source)

    The toolkit was produced by CAFOD, Christian Aid and Trócaire.

    This project was started by the three agencies with a view to supporting partner organisations, particularly church-based organisations, to hold their governments to account for the consequences of their policies. This toolkit specifically targets African partners, seeking to share the struggles and successes of partners already monitoring government policies with those that are new to this work.

    The development of this toolkit has been an in-depth process. Two consultants were commissioned to research and write the toolkit. They were supported by a reference group composed of staff from CAFOD, Christian Aid and Trócaire and partner organisations with experience in policy monitoring. The draft toolkit was piloted with partners in workshops in Malawi, Sierra Leone and Ethiopia. Comments from the reference group and the workshops contributed to this final version of the toolkit.

    Contents

    INTRODUCTION  1
    CHAPTER ONE: GETTING STARTED
    1.1  Core concepts in policy monitoring 5
    1.2  Identifying problems, causes and solutions 8
    1.3  Beginning to develop a monitoring approach 10
    Interaction  13
    CHAPTER TWO: CHOOSING POLICIES AND COLLECTING INFORMATION
    2.1  Different kinds of policies 15
    2.2  Which policies to monitor 18
    2.3  Access to policy information  22
    2.4  Collecting policy documents 24
    Interaction   27
    CHAPTER THREE: IDENTIFYING POLICY STAKEHOLDERS
    3.1  Stakeholders of government policies 29
    3.2  Target audiences and partners  31
    3.3  Monitoring by a network of stakeholders 34
    Interaction  37
    CHAPTER FOUR: LOOKING INTO A POLICY AND SETTING YOUR FOCUS
    4.1  Analysing the content of a policy 39
    4.2  Defining your monitoring objectives 42
    4.3  What kind of evidence do you need? 44
    4.4 Choosing indicators 47
    4.5  Establishing a baseline 50
    Interaction  52
    CHAPTER FIVE: ANALYSING POLICY BUDGETS
    5.1  Budget basics  55
    5.2  Resources for policy implementation 59
    5.3 Budget analysis 61
    5.4 Interaction  67

    CHAPTER SIX: GATHERING EVIDENCE ON POLICY IMPLEMENTATION
    6.1 Interviews  69
    6.2 Surveys 72
    6.3  Analysing survey data and other coded information 77
    6.4  Workshops, focus group discussions and observation 84
    Interaction  89
    CONCLUSION: USING POLICY EVIDENCE TO ADVOCATE FOR CHANGE
    Interaction  98
    RESOURCES AND CONTACTS 100