On February 23rd, the Stanford Social Innovation Review asked its readers to predict the results of two randomised controlled trials (RCTs), before the results become publicly available. Both studies “tested whether consulting services can help enterprises grow. In other words, with nothing more than advice, can small firms or microenterprises increase their profits? Or are they already optimizing, given their resources?”
The website provides some background information on both interventions and the aims of each study. It also offers four possible outcomes for participants to choose from, and a modest prize is offered to participants who correctly predict the study findings.
The authors provide this description of their intentions: “With this experiment, we also are taking a baby step toward a more ambitious idea—to have a market in predicting the results of randomized trials. Such a market would serve two purposes. First, it would allow stakeholders to stake their claim (pun intended) on their predictions and be held to acclaim when they are right or to have their opinions challenged when they are wrong. Second, such a market could help donors, practitioners, and policymakers make decisions about poverty programs, by engaging the market’s collective wisdom. (Think www.intrade.com, but for results of social impact interventions.)”
The last sentence seems to imply that a correctly designed and managed market will deliver successful predictions. This has been found to be the case in some other fields, but it may or may not hold for the results of RCTs.
There is another potentially valuable use of the same process. A “pre-dissemination of results” survey would establish a baseline measure of public understanding in the field under investigation [with the caveat that the profile of the particular participating “public” would need to be made clear]. For example, 30% of survey participants may have successfully predicted that Outcome 1 would be supported by the RCT findings. After the RCT findings were shared with participants, a follow-up survey of the same participants could then ask something like “Do you accept the validity of the findings?” or something more general like “Have these results been sufficient to change your mind on this issue?” The percentage of participants who made wrong predictions but nevertheless accepted the study results would then be a reasonable measure of immediate impact. [Fortunately the SSIR survey includes a request for participant email addresses, which are necessary if participants are to receive their prize, and which would also make such a follow-up survey feasible.]
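To make the tally concrete, here is a minimal sketch in Python of how that “immediate impact” measure could be computed from paired survey responses. The field names and the choice of wrong predictors as the denominator are my own assumptions for illustration, not features of the SSIR survey itself.

```python
# Sketch of the predict-disclose-compare tally described above.
# Assumes each participant's paired responses are recorded as a dict with
# a "predicted_outcome" (from the pre-dissemination survey) and an
# "accepted_findings" flag (from the follow-up survey). Both names are
# hypothetical, chosen only for this example.

def immediate_impact(responses, true_outcome):
    """Share of wrong predictors who nevertheless accepted the findings."""
    wrong = [r for r in responses if r["predicted_outcome"] != true_outcome]
    if not wrong:
        return 0.0  # everyone predicted correctly; nothing to measure
    accepted = [r for r in wrong if r["accepted_findings"]]
    return len(accepted) / len(wrong)

# Worked example: 10 participants, Outcome 1 turns out to be supported.
responses = (
    [{"predicted_outcome": 1, "accepted_findings": True}] * 3    # predicted correctly
    + [{"predicted_outcome": 2, "accepted_findings": True}] * 5  # wrong, but persuaded
    + [{"predicted_outcome": 3, "accepted_findings": False}] * 2 # wrong, unpersuaded
)
print(immediate_impact(responses, true_outcome=1))  # 5/7 ≈ 0.71
```

One could equally use all participants as the denominator; what matters is stating the choice clearly when reporting the measure.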
Bearing this in mind, it would be good if the Review could provide its readers with some analysis of the overall distribution of the predictions made by participants, not just information on who the winner was.
PS: The same predict-disclose-compare process can also be used in face-to-face settings, such as workshops designed to disseminate the findings of impact assessments, and has undoubtedly been used by others before today [including by myself with Proshika staff in Bangladesh, many years ago].
[Thanks to @carolinefiennes for alerting me to this article]
PS 14 March 2012: See “Posting Hypotheses for an Impact Study of Compartamos” by Dean Karlan, where one of his objectives is to be able to compare the eventual results with prior opinions.