The purpose of this page
…is to record some ongoing reflections on my experience of running two pre-tests of ParEvo carried out in late 2018 and early 2019.
Participants and others are encouraged to add their own comments using the Comment facility at the bottom of this page.
Two pre-tests are underway
- One involves 11 participants developing a scenario involving the establishment of an MSC (Most Significant Change) process in a development programme in Nigeria. These volunteers were found via the MSC email list. They came from 7 countries and 64% were women.
- The other involves 11 participants developing a Brexit scenario following Britain failing to reach an agreement with the EU by March 2019. These participants were found via the MandE NEWS email list. They came from 9 countries and 46% were women.
For more background (especially if you have not been participating), see this 2008 post on the process design and this 2019 conference abstract discussing these pre-tests.
Reflections so far
Issues arising…
- How many participants should there be?
- In the current pre-tests, I have limited the number to around 10. My concern is that with larger numbers there will be too many story segments (and their storylines) for people to scan and make a single preferred selection. But improved methods of visualising the text contributions may help overcome this limitation. Another option is to allow/encourage individual participants to represent teams of people, e.g. different stakeholder groups. I have not yet tried this out.
- Do the same participants need to be involved in each iteration of the process?
- My initial concern was that not doing so would make some of the follow-up quantitative analysis more difficult, but I am less concerned about that now; it is a manageable problem. On the other hand, it is likely that some people will have to drop out mid-process, and ideally they could be replaced by others, thus maintaining the diversity of storylines.
- How do you select an appropriate topic for a scenario planning exercise?
- Ideally, it would be a topic that is of interest to all the participants and one which they feel some confidence talking about, even if only in terms of imagined futures. One pre-test topic, the use of MSC in Nigeria, was within these bounds. But the other was more debatable: the fate of the UK after no resolution of Brexit terms by 29th March 2019.
- How should you solicit responses from participants?
- I started by sending a standard email to all the (MSC scenario) participants, but this was cumbersome and had risks. It is too easy to lose track of who contributed what text, and to which existing storyline. I am now using a two-part, single-question survey via SurveyMonkey. This enables me to keep a mistake-free record of who contributed what to what, and of who has and has not responded. But it still involves sending multiple communications, including reminders, and I have sometimes confused what I am sending to whom. A more automated system is definitely needed.
- How should you represent and share participants' responses?
- This has been done in two forms. One is a tree diagram showing all storylines, where participants can mouse over nodes to immediately see each text segment. Alternatively, they can click on each node to go to a separate web page and see complete storylines. Both are laborious to construct, but they will hopefully soon be simplified and automated via some tech support which is now under discussion. PS: I have now resorted to only using the tree diagram with mouseover. (A rough sketch of the underlying data structure follows below.)
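For readers curious about how such a tree might be stored, here is a minimal sketch, assuming each segment records only its own text and a link to its parent, so that a complete storyline is simply the path from a leaf node back to the root. The segment texts, IDs, and function name are illustrative placeholders, not actual contributions.

```python
# A sketch only: segment texts and IDs below are invented placeholders.
segments = {
    "1": {"text": "An MSC process is proposed for the programme.", "parent": None},
    "1.1": {"text": "Field staff are trained in story collection.", "parent": "1"},
    "1.1.1": {"text": "The first stories are collected and reviewed.", "parent": "1.1"},
}

def storyline(leaf_id):
    """Return the complete storyline ending at leaf_id, in reading order."""
    parts = []
    node = leaf_id
    while node is not None:
        parts.append(segments[node]["text"])
        node = segments[node]["parent"]
    return " ".join(reversed(parts))

print(storyline("1.1.1"))  # prints the root-to-leaf text of one storyline
```

Storing only parent links keeps each contribution independent, which is also what makes a mouseover tree diagram straightforward to generate.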
- Should all contributions be anonymous?
- There are two types of contributions: (a) the storyline segments contributed during each iteration of the process, and (b) comments made on those contributions, which can be enabled on the blog page that hosts each full storyline to date. The second type was an afterthought, whereas the first is central to the process.
- The first process, contributing to storylines, was designed to make authorship anonymous, so that people would focus on the contents. I think this remains a good feature.
- The second process, allowing people to comment, has pros and cons. The advantage is that it can enrich the discussion, providing a meta-level to the main activity, which is storyline development. The risk, however, is that if comments cannot be made anonymously, a careful reader of them can sometimes work out who made which storyline contribution. I have tried to make comments anonymous, but they still seem to reveal the identity of the person making them. This may be resolvable. PS: This option is not currently available, while I am only using the tree diagram to show storylines. This may need to be changed.
- How many iterations should be completed?
- It has been suggested that participants should know this in advance, so that their story segments don't leap into the future too quickly or, the reverse, progress the story too slowly. With the Brexit scenario pre-test I am inclined to agree. It might help to say at the beginning that there will be five iterations, ending in the year 2025. With the MSC scenario pre-test I am less certain; it seems to be moving on at a pace I would not have predicted.
- I am now thinking it may also be useful to spell out in advance the number of iterations that will take place, and perhaps even to suggest that each one will represent a given increment in time, say a month or a year, or…
- What limits should there be on the length of the text that participants submit?
- I have really wobbled on this issue, ranging from 100-word limits to 50-word limits to no stated limits at all. Perhaps when people select which storyline to continue, the length of the previous contributions will be something they take into account? I would like to hear participants' views on this issue. Should there be word limits, and if so, what sort of limit?
- What sort of editorial intervention should there be by the facilitator, if any?
- I have been tempted, more than once, to ask some participants to reword and revise their contribution. I now limit myself to very basic spelling corrections, checked with the participant if necessary. I was worried that some participants had a limited grasp of the scenario topic, but I now think that just has to be part of the reality: some people have little to go on when anticipating specific futures, and others may have “completely the wrong idea”, according to others. As the facilitator, I now think I need to stand back and let things run.
- Another thought I had some time ago is that the facilitator could act as the spokesperson for “the wider context”, including any actors not represented in any of the participants' contributions so far. At the beginning of a new iteration, they could provide some contextual text that participants are encouraged to bear in mind when designing their next contribution. If so, how and where should this context information be presented?
- How long should a complete exercise take?
- The current pre-tests are stretching out over a number of weeks, but I think this will be an exception. In a workshop setting where all participants (or teams of them) have access to a laptop and the internet, it should be possible to move through quite a few iterations within a couple of hours. In other, non-workshop settings, perhaps a week will be long enough, if all participants have a stake in the process. Compacting the available time might generate more concentration and focus. The web app now under development should also radically reduce the turnaround time between iterations, because manual work done by the facilitator will be automated.
- Is my aim to have participants evaluate the completed storylines realistic?
- After the last iteration, I plan to ask each participant, probably via an online survey page, to identify: (a) the most desirable storyline, and (b) the storyline most likely to happen. But I am not sure if this will work. Will participants be willing to read every storyline from beginning to end? Or will they make judgments on the basis of the last addition to each storyline, which they will be more familiar with? And how much will this bias their judgments (and how could I identify whether it does)?
- What about the contents?
- One concern I have is the apparent lack of continuity between some of the contributions to a storyline. Is this because the participants are very diverse? Or because I have not stressed the importance of continuity? Or because I can’t see the continuity that others can see?
- What else should we look for when evaluating the content as a whole? One consideration might be the types of stakeholders who are represented or referred to, and those which seem to be ignored.
- How should performance measures be used?
- Elsewhere I have listed a number of ways of measuring and comparing how people contribute and how storylines are developed. Up to now, I have thought of this primarily as a useful research tool, which could be used to analyze storylines after they have been developed.
- But after reading a paper on the “gamification” of scenario planning, it occurred to me that some of these measures could be more usefully promoted at the beginning of a scenario planning exercise, as measures that participants should be aware of, and even seek to maximize, when deciding how and where to contribute. For example, one measure is the number of extensions that have been added to a participant's texts by other participants, and another is how evenly those extensions are distributed (known as variety and balance). A rough sketch of how these might be computed follows below.
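As an illustration, here is a minimal sketch of how those two measures could be computed, assuming contributions are recorded as (segment ID, author, parent ID) records. The definitions of variety and balance used here are my reading of the description above rather than an authoritative specification, and the data is invented.

```python
from collections import Counter, defaultdict
from math import log

# Hypothetical contribution records: (segment_id, author, parent_id).
# parent_id is None for seed segments.
contributions = [
    ("1", "A", None),
    ("1.1", "B", "1"),
    ("1.2", "C", "1"),
    ("1.1.1", "C", "1.1"),
    ("1.1.2", "D", "1.1"),
]

author_of = {seg: author for seg, author, _ in contributions}

# For each author, collect who extended their segments (ignoring self-extensions).
extenders = defaultdict(list)
for seg, author, parent in contributions:
    if parent is not None and author != author_of[parent]:
        extenders[author_of[parent]].append(author)

for author, ext in sorted(extenders.items()):
    variety = len(ext)  # number of extensions added by other participants
    counts = Counter(ext)
    # Balance: Shannon evenness of extensions across distinct extenders
    # (1.0 = perfectly even; defined as 1.0 when there is a single extender).
    probs = [c / variety for c in counts.values()]
    entropy = -sum(p * log(p) for p in probs)
    balance = entropy / log(len(counts)) if len(counts) > 1 else 1.0
    print(f"{author}: variety={variety}, balance={balance:.2f}")
```

Measures like these could be shown to participants at the start of an exercise, in the gamification spirit mentioned above.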
- Stories as predictions
- Most writers on scenario planning emphasize that scenarios are not meant to be predictions, but are more like possibilities that need to be planned for.
- But if ParEvo were used in an M&E context, could participants be usefully encouraged to write story segments as predictions, and then be rewarded in some way if those came true? This would probably require an exercise to focus on the relatively near future, say a year or two at the most, with each iteration perhaps only covering a month or so.
- Tagging of story segments
- It is common practice to use coding/tagging of text contents in other settings. Would it be useful with ParEvo? An ID tag is already essential, in order to identify and link story segments. (A rough sketch of such an ID scheme follows below.)
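For illustration, here is a minimal sketch of how hierarchical ID tags of the “1.3.1.1” style used in the pre-tests could be assigned: each new segment is numbered as the next child of the segment it extends. The function and variable names are my own assumptions.

```python
from collections import defaultdict

# How many children each segment has so far; "" stands for the root level.
child_counts = defaultdict(int)

def new_segment_id(parent_id=None):
    """Return the next available ID under parent_id (None starts a new storyline)."""
    key = parent_id or ""
    child_counts[key] += 1
    n = child_counts[key]
    return f"{parent_id}.{n}" if parent_id else str(n)

root = new_segment_id()   # "1"
a = new_segment_id(root)  # "1.1"
b = new_segment_id(root)  # "1.2"
print(new_segment_id(a))  # "1.1.1"
```

Content tags (e.g. for the stakeholders mentioned) could then be attached to these same IDs without changing the tree structure.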
- What other issues are arising and need discussion?
- Over to you… please comment below.
- I also plan to have one-to-one Skype conversations with participants, to get your views on the process and products.
To prune or not to prune …
Tree growers (of fruit trees, bonsais) often prune their trees to get a more productive or a more beautiful tree. Should the same be acceptable for a “scenario tree”? Is it best to let the stories develop completely ‘wild’? Or should the gardener have the power to intervene and cut off branches which are less likely to develop nicely, or which will be less able to carry the fruit the tree is expected to produce in future? It could be, for example, that two branches are shooting out close to each other (i.e. rather similar scenarios), and instead of spending energy growing both in parallel, one of the two could be pruned so that the other can grow more vigorously. Pruning like that is probably best done early on.
Some pruning could also happen later on, when there is a branch that started to develop but did not branch off any further. This latter pruning is perhaps more for aesthetics.
Limit on length of submitted text.
I prefer the “… up to two sentences” stipulation, rather than a 50- or 100-word limit. I also found it easier to develop a continuation on a longer storyline than a shorter one, probably because a longer one may already have more specificity and detail.
How should storylines be allowed to develop? After the second iteration, should the third iteration only continue those Storylines 1.x that had additional branches? Or could a participant's third-iteration contribution be another possible development of a Storyline 1.x (but a different one from the one s/he picked before)? The analogy is that in a tree the new shoots are not always at the outer end; there can be new sprouts on some older branches as well.
It would result in a “fuller” scenario tree, but it would also mean that some storylines are more extended than others.
Does anyone have sufficient experience of the Delphi Method to add to any of these comments? Delphi isn't quite so scenario-based, and other aspects are radically different, but some of the rule structures might be relevant even if the actual rules are different.
Thanks Bob.
There is a series of papers reviewing the literature on scenario planning, which I am accumulating and will make available in the form of an online Zotero bibliography (no subscriptions or sign-ups needed).
Hi Bob. The Wikipedia entry on Delphi is quite helpful (https://en.wikipedia.org/wiki/Delphi_method). Delphi, like ParEvo, involves iterations. It is also based on anonymous contributions. But there are two important differences, I think. One is that Delphi places a greater emphasis on developing a consensus view, versus the continued maintenance of diversity in the ParEvo process. The other is that the facilitator in the Delphi process seems to have a more central role, collating and aggregating participants' views, whereas in ParEvo the process is more decentralized, and participants' choices of which storyline to add to are more influential in shaping the whole product.
How many participants should there be?
There are many ways to address your concern that “with larger numbers there will be too many story segments (and their storylines) for people to scan and make a single preferred selection” by scaling the process. One of these is what is known as fractal engagement: limit the set of stories exposed to any one group of participants. In the first iteration, different groups work on different sets of stories. In the next iteration, the story sets are exchanged among the groups. This random mixing and matching of stories to participants continues for several iterations, until each participant is happy with his or her selection.
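A minimal sketch of the rotation idea described in the comment above, assuming storylines can be partitioned into fixed sets that groups swap each iteration; the group names, story IDs, and function name are all invented for the example.

```python
import random

def rotate_story_sets(story_sets, groups, iterations, seed=0):
    """Yield one {group: story set} assignment per iteration.

    Note: a plain shuffle does not guarantee a group never sees the same
    set twice; a fuller implementation might track past assignments.
    """
    rng = random.Random(seed)
    sets = list(story_sets)
    for _ in range(iterations):
        rng.shuffle(sets)
        yield dict(zip(groups, sets))

# Invented example: three groups, each seeing two storylines per iteration.
story_sets = [["1.1", "1.2"], ["1.3", "1.4"], ["1.5", "1.6"]]
groups = ["group A", "group B", "group C"]
for i, assignment in enumerate(rotate_story_sets(story_sets, groups, 3), 1):
    print(f"iteration {i}: {assignment}")
```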
The word limit is important for developing a story. While some stories are more descriptive and explain the stages in the story's development, others are restricted by the word limit. Thus, it is not fair to evaluate and select stories on merit alone. For example, 1.3.1.1 has highlighted an important aspect of the process – training local people to establish credibility. Story 1.9.1.1 gives details of establishing a scoring system and helping coordinators to develop a template. Thus, it is difficult to reject 1.3.1.1, given that it was constrained by the word limit.
The number of iterations should be known in advance, to help set the pace of story development.
Thanks Rick for the opportunity to participate. I have always enjoyed your work, but I am sorry to say that my feedback is mostly not very positive.
Maybe I am not up to speed on this method, but the whole process for me lacked clarity of purpose: what were we trying to achieve, and to what ends were we developing these storylines? This was not clear to me, and the process therefore felt a bit meaningless. I would have preferred a Skype call with the group first, to introduce what you are trying to achieve and to brief us on the methodology and process.
I agree with the suggestions above about more intervention and ‘shaping’ from the facilitator.
Best wishes, Rob
I agree with the comment above about the purpose of the storyline. Brexit… can be so many things. It can be the normative side (UK-centric? EU-centric?), or it can be the social one… There are just too many angles (and probably different profiles in the group) to have a sense of purpose. Also… who would be the audience? An MP? A person working to organize social services? The narratives were really diverse, coming from very different points of view.