Livelihoods Monitoring and Evaluation: A Rapid Desk Based Study

Posted on 19 November, 2014 – 8:22 PM

by Kath Pasteur, 2014, 24 pages. Found here: http://www.evidenceondemand.info/livelihoods-monitoring-and-evaluation-a-rapid-desk-based-study

Abstract: “This report is the outcome of a rapid desk study to identify and collate the current state of evidence and best practice for monitoring and evaluating programmes that aim to have a livelihoods impact. The study identifies tried and tested approaches and indicators that can be applied across a range of livelihoods programming. The main focus of the report is an annotated bibliography of literature sources relevant to the theme. The narrative report highlights key themes and examples from the literature relating to methods and indicators. This collection of resources is intended to form the starting point for a more thorough organisation and analysis of material for the final formation of a Topic Guide on Livelihoods Indicators. This report has been produced by Practical Action Consulting for Evidence on Demand with the assistance of the UK Department for International Development (DFID) contracted through the Climate, Environment, Infrastructure and Livelihoods Professional Evidence and Applied Knowledge Services (CEIL PEAKS) programme, jointly managed by HTSPE Limited and IMC Worldwide Limited”

Full reference: Pasteur, K. Livelihoods monitoring and evaluation: A rapid desk based study. Evidence on Demand, UK (2014) 24 pp. [DOI: http://dx.doi.org/10.12774/eod_hd.feb2014.pasteur]


Process evaluation of complex interventions. UK Medical Research Council (MRC) guidance

Posted on 10 November, 2014 – 7:22 PM

(copied from here: http://decipher.uk.net/process-evaluation-guidance/)

“Updated MRC guidance for evaluation of complex interventions published in 2008 (Craig et al. 2008) highlighted the value of process evaluation within trials of complex interventions in order to understand implementation, the mechanisms through which interventions produce change, and the role of context in shaping implementation and effectiveness. However, it provided limited insight into how to conduct a good quality process evaluation.

New MRC guidance for process evaluation of complex interventions has been produced on behalf of the MRC Population Health Sciences Research Network by a group of 11 health researchers from 8 universities, in consultation with a wider stakeholder group. The author group was chaired by Dr Janis Baird, MRC Lifecourse Epidemiology Unit, University of Southampton. The development of the guidance was led by Dr Graham Moore, DECIPHer, Cardiff University.

The document begins with an introductory chapter which sets out the reasons why we need process evaluation, before presenting a new framework which expands on the aims for process evaluation identified within the 2008 complex interventions guidance (implementation, mechanisms of impact and context). It then presents discrete sections on process evaluation theory (Section A) and process evaluation practice (Section B), before offering a number of detailed case studies from process evaluations conducted by the authors (Section C).

The guidance has received endorsement and support from the MRC’s Population Health Science Group and Methodology Research Panel, as well as NIHR NETSCC. An abridged version will also follow shortly.

You can download the 2014 guidance (pdf) by clicking here.

An editorial in the BMJ explains why process evaluation is key to public health research, and why new guidance is needed. The editorial is available, open access, here.

If you have any queries, please contact Dr. Graham Moore: MooreG@cardiff.ac.uk.”


Looking for case studies of beneficiary feedback in evaluation

Posted on 7 November, 2014 – 11:42 AM
[From Leslie Groves]
Dear MandE
I have been commissioned by the UK Department for International Development to produce a short practical note on incorporating beneficiary feedback within evaluation. I am exploring questions such as: How do we define beneficiary feedback in evaluation? How is it different from participatory evaluation/ participatory methods in evaluation? What is the added value? What are the practical implications (ethical/ logistical/ practical)? What is a reasonable requirement for beneficiary feedback in evaluation?
I would really welcome thoughts from this community on these questions. There are three ways in which I hope to engage with some of you:
1) Through this e-list (https://groups.yahoo.com/neo/groups/MandENEWS/info)
2) Through my blog (http://beneficiaryfeedbackinevaluationandresearch.wordpress.com/)
3) Through email for those of you who may wish to contact me directly (lesliecgroves@gmail.com)
I am also looking for case study examples from anyone who has engaged, or is engaging, beneficiary feedback mechanisms in evaluation. It would be great to hear from you.
With many thanks and best regards to all
Leslie
Leslie Groves Williams (PhD)

Senior Social Development Consultant
Skype: lesliegroves | http://www.linkedin.com/in/lesliegroves

Rick Davies comment: I understand Leslie is developing a bibliography which will be made available online via her blog site. And I expect that her report, once accepted by DFID, will be made publicly available. If so, details will be posted here.
Some material that is already emerging via email list inquiries:

The use of Rasch scales in monitoring, gender analysis and attitude measurement

Posted on 25 October, 2014 – 1:38 AM

 

Monitoring systems occasionally incorporate elements of action research. A water and sanitation project in northern Bangladesh elaborated a Gender Analytic Framework to help organize public conversations on gender roles in the household, the village and local government. Trained volunteers facilitated sessions and recorded responses to 29 gender-related items in 988 villages over four years. “I myself went to see the Chairman!” analyzes the change in gender role attitudes with the help of Rasch scales. The tool achieves two things: it condenses a large and heterogeneous body of item responses into a summary attitude measure, and it thereby makes gender attitudes amenable to analysis in terms of community baseline attributes, WatSan project inputs and pre-existing local attitudes.

Benini, Aldo, Reazul Karim et al.: “I myself went to see the Chairman!” – Change in gender role attitudes in a water and sanitation project in northern Bangladesh. An analysis of DASCOH’s Gender Analytical Framework data, 2011 – 2014. Rajshahi and Sunamganj, Bangladesh: DASCOH – Development Association for Self-reliance, Communication and Health, 2014.
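For readers new to Rasch scaling, the sketch below (my own illustration in Python, not the authors’ estimation code) shows the dichotomous Rasch model that underlies such attitude measures: the probability that respondent n endorses item i is exp(theta_n − b_i) / (1 + exp(theta_n − b_i)), and person attitudes theta and item difficulties b are estimated jointly from the binary responses. All data here are simulated; only the item count (29) echoes the report.

# Minimal joint maximum likelihood sketch of the dichotomous Rasch model.
# Illustration only - not the estimation code used in the DASCOH analysis.
import numpy as np

def fit_rasch(X, n_iter=500, lr=0.05):
    """X: binary response matrix (respondents x items), no missing data.
    In practice respondents with all-0 or all-1 response strings are dropped,
    since their attitude estimates are unbounded."""
    n_persons, n_items = X.shape
    theta = np.zeros(n_persons)   # person attitude estimates
    b = np.zeros(n_items)         # item difficulty estimates
    for _ in range(n_iter):
        # Model probability of endorsing each item
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        theta += lr * (X - p).sum(axis=1)   # gradient ascent on the log-likelihood
        b -= lr * (X - p).sum(axis=0)
        b -= b.mean()                       # anchor the scale (identification constraint)
    return theta, b

# Simulated responses of 500 respondents to 29 yes/no attitude items
rng = np.random.default_rng(0)
true_theta, true_b = rng.normal(size=500), rng.normal(size=29)
p_true = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_b[None, :])))
X = (rng.random((500, 29)) < p_true).astype(float)
theta_hat, b_hat = fit_rasch(X)
print("Estimated attitude scores, first five respondents:", np.round(theta_hat[:5], 2))

The resulting person scores are the kind of summary measure that can then be related to community baseline attributes and project inputs, as the report does.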

 

 


Evidence of the Hawthorne effect – worth knowing about and watching out for

Posted on 16 October, 2014 – 2:30 PM

(copied from the World Bank “Development Impact” blog)

Quantifying the Hawthorne Effect

Submitted by Jed Friedman on 2014/10/16. This post is co-authored with Brinda Gokul.

Many who work on impact evaluation are familiar with the concept of the Hawthorne effect and its potential risk to the accurate inference of causal impact. But if this is a new concept, let’s quickly review the definition and history of the Hawthorne effect:

  • The Hawthorne effect refers to study participants’ alteration of behavior solely as a result of being observed (rather than as a result of the intervention). Hence for the effect to exist it is necessary for the subjects to realize they are under observation. The term originates from the Western Electric Company’s Hawthorne Works Plant in Chicago where, in the late 1920s and early 1930s, researchers tried to study the effects of altered workplace lighting on worker productivity. It turned out that worker productivity improved when the lighting was increased, but also improved when the lighting was dimmed. Indeed it became apparent that whenever a change was implemented, such as a change in work hours, productivity improved for a period of time. The conclusion: productivity was not being affected by the changes in workplace conditions but instead by the self-knowledge of workers that they were under observation.

So the Hawthorne effect may present a challenge to the validity of causal inference (when agents respond to the knowledge they are being studied rather than respond to the changed environment as a result of the intervention) or may present a challenge to the accuracy of measurement (when the fact of observation alters the behavior measured). Clearly any effect magnitude, and indeed whether the effect arises at all, depends on the study context including the type of behavior observed. Yet only a handful of studies have attempted to identify and measure the Hawthorne effect.

My colleague Brinda Gokul and I recently reviewed the health economics and public health literature that explicitly study the effect in the general field of health. This is a hard question to get at, but some inventive studies, at times utilizing new technology, have given us some fascinating results. (It’s a fairly nascent literature, and at the bottom of this post we list the papers that we have found.)

With respect to the behavior of health providers in developing countries, one of the more extensive studies of the Hawthorne effect was conducted in the Arusha region of Tanzania and resulted in a series of papers by Kenneth Leonard and Melkiory Masatu. The challenge in measuring the Hawthorne effect is that we also need “stealth” data on subject behaviors when they are not aware of observation. The Tanzania study’s trick was to use patient recall interviews, conducted soon after the clinic visit, to reconstruct the actions of the clinicians and specifically their adherence to proper medical protocols. This stealth data is then compared with what is recorded by trained enumerators when they observe the clinician treating patients.

Of course the first step is to validate the accuracy of the “stealth” patient recall data, which the investigators do by comparing the enumerator observation record with patient recall data for the patient visits that were explicitly observed. It turns out there is a high degree of concurrence, with agreement between observer and patient on approximately 70% of the items measured.

Prior to the arrival of the research team, patient recall measured an average 53% baseline adherence to medical protocol by health providers; after the arrival of the research team, this adherence increased by almost 10 percentage points (or roughly 20%). And it appears that adherence to protocol actually matters, as cases with higher adherence also have higher rates of accurate diagnosis and higher patient satisfaction. Of note is that the Hawthorne effect was rather short-lived – providers returned to baseline levels of adherence after 10-15 observed patient visits. I’ve been told of this same effect by education researchers, who say that teachers habituate quite rapidly to video observation and return to “normal” levels of practice within a day or so of the introduction of the camera.

Another example is from the Indian state of Orissa where investigators evaluating sanitation efforts to increase latrine use and reduce open defecation took a technological approach to the collection of stealth data: they developed a small hidden device to monitor latrine activity that recorded the times when the latrine door was opened. They called this device a PLUM – passive latrine use monitor. Amongst households with a latrine, the PLUM recorded members of the household using the latrine on average 2.11 times in the morning and 0.53 times in the afternoon.

Once this baseline data was collected, the study induced the Hawthorne effect by sending an observer to stand outside selected latrines for a five hour period in order to validate the passive monitor. It turned out that when a known observer was in place, household latrine compliance increased in the morning from 2.11 times to 2.71 and in the afternoon from 0.53 to 1.03 times. (Clasen, 2012).
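A back-of-the-envelope way to express the size of a Hawthorne effect in studies like these is simply the absolute and relative gap between covertly and overtly measured behaviour. The short sketch below (my own illustration, using the PLUM figures quoted above) makes the arithmetic explicit.

# Quantifying a Hawthorne effect as the gap between covert (baseline) and
# overt (observed) measurements. Figures are those reported above from the
# Orissa PLUM study (Clasen et al., 2012).
def hawthorne_effect(covert_rate, overt_rate):
    absolute = overt_rate - covert_rate
    relative = absolute / covert_rate
    return absolute, relative

for period, covert, overt in [("morning", 2.11, 2.71), ("afternoon", 0.53, 1.03)]:
    abs_gap, rel_gap = hawthorne_effect(covert, overt)
    print(f"{period}: +{abs_gap:.2f} latrine uses per household ({rel_gap:.0%} increase under observation)")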

For several healthcare studies in Europe, Canada and the United States, results also identified the Hawthorne effect. Here is a selection of findings:

  • By contrasting hand sanitation practices recorded discreetly through a real-time location system (RTLS) – which logged all dispensations from hand sanitizer stations as well as the movement of explicit hygiene auditors in an acute care hospital – researchers identified the Hawthorne effect. The researchers recorded a rate of 3.75 dispensations per hour when the sanitizer station was visible to the auditor, compared to 1.48 dispensations per hour from a station not currently under observation. In contrast, there were only 1.07 dispensations per hour in the week before the observers arrived.
  • Another study of hand sanitation practices contrasted antiseptic hand rub (AHR) use among medical personnel in five intensive care units, both in the presence and absence of an announced observer of sanitary practice. When the official observer was absent, a trained covert observer – a research nurse who was part of the medical team – also recorded AHR use. The Hawthorne effect had a marked impact on AHR compliance: 29% baseline compliance versus 45% compliance in the overt observation period. (Ruden)
  • The Hawthorne effect was induced in paramedic reporting practices solely by written notification of an upcoming quality-improvement audit, thus indicating that the effect does not always require direct observation or feedback. In this case, post audit announcement, the rate of incomplete paramedic reports declined by 20%. This finding also suggests, of course, a behavioral mechanism behind many Hawthorne effects – the perceived demand for performance. (Campbell)

Many of these reviewed studies look at small samples and are relatively short-term. So the persistence of the observed effect remains an important open question, as does the interaction between observation and the complexity of the behavior studied.

Here is the list of work attempting to quantify the Hawthorne effect that we have found for the health related field – please add to it if you know of others (in any field) – we’d be very grateful.

Some References

Campbell, JP, VA Maxey, WA Watson. “Hawthorne Effect: Implications for Pre-hospital Research.” Annals of Emergency Medicine, 26.5 (1995): 590-94.

Clasen T, Fabini D, Boisson S, Taneja J, Song J, Aichinger E, Bui A, Dadashi S, Schmidt W, Burt Z, Nelson K. “Making Sanitation Count: Developing and Testing a Device for Assessing Latrine Use in Low-Income Settings.” Environmental Science & Technology 46.6 (2012): 3295-3303.

De Amici, D, C Klersy, F Ramajoli, L Brustia, and P Politi. “Impact of the Hawthorne Effect in a Longitudinal Clinical Study: The Case of Anesthesia.” Controlled Clinical Trials 21 (2000): 103-14.

Eckmanns T, Bessert J, Behnke M, Gastmeier P, Ruden H. “Compliance with Antiseptic Hand Rub Use in Intensive Care Units: The Hawthorne Effect.” Infection Control and Hospital Epidemiology, 27 (2006): 931-934.

Feil, PH, JS Grauer, CC Gadbury-Amyot, K Kula, MD McCunniff. “Intentional Use of the Hawthorne Effect to Improve Oral Hygiene Compliance in Orthodontic Patients.” Journal of Dental Education, 66 (2002): 1129-1135.

Grol, RP, WH Verstappen, T van der Weijden, G Riet. “Block Design Allowed For Control Of The Hawthorne Effect In A Randomized Controlled Trial Of Test Ordering.” Journal of Clinical Epidemiology, 57 (2004): 1119-1123.

Kohli E, Ptak J, Smith R, et al. “Variability in the Hawthorne effect with regard to hand hygiene practices: independent advantages of overt and covert observers.” PLoS ONE, 8 (2013): 353746.

Leonard, KL. “Is patient satisfaction sensitive to changes in the quality of care? An exploitation of the Hawthorne effect.” Journal of Health Economics, 27 (2008): 444-459.

Leonard, KL, and MC Masatu. “Outpatient Process Quality Evaluation and the Hawthorne Effect.” Social Science & Medicine 63 (2006): 2330-340.

Leonard, KL, and MC Masatu. “Using the Hawthorne Effect to Examine the Gap between a Doctor’s Best Possible Practice and Actual Performance.” Journal of Development Economics 93.2 (2010): 226-34.

McCarney, R, J Warner, S Iliffe, R van Haselen, M Griffin, P Fisher. “The Hawthorne Effect: a Randomised Controlled Trial.” BMC Medical Research Methodology, 7 (2007): 30.

McGlynn, EA, R Mangione-Smith, M Elliott, & L McDonald. “An Observational Study of Antibiotic Prescribing Behavior and the Hawthorne Effect.” Health Services Research, 37 (2002), 1603-1623.

Fernald, DH, L Coombs, L DeAlleaume, D West, B Parnes. “An Assessment of the Hawthorne Effect in Practice-based Research.” The Journal of the American Board of Family Medicine, 25 (2012): 83-86.

Srigley, J, C Furness, G. Baker, M Gardam. “Quantification of the Hawthorne effect in hand hygiene compliance monitoring using an electronic monitoring system: A retrospective cohort study.” The International Journal of Healthcare Improvement, 10 (2014): 1-7.


Why evaluations fail: The importance of good monitoring (DCED, 2014)

Posted on 17 September, 2014 – 5:04 PM

Adam Kessler and Jim Tanburn, August 2014, Donor Committee for Enterprise Development (DCED). 9 pages. Available as pdf.

Introduction:  A development programme without a strong internal monitoring system often cannot be effectively evaluated. The DCED Standard for Results Measurement is a widely-used monitoring framework, and this document discusses how it relates to external evaluations. Why should evaluators be interested in monitoring systems? How can the DCED Standard support evaluations, and vice versa? Who is responsible for what, and what are the expectations of each? This document expands previous work by the UK Department for International Development (DFID).

This document is relevant for evaluators, those commissioning evaluations, and practitioners in programmes using the DCED Standard and undergoing an evaluation. It provides a basis for dialogue with the evaluation community; the aims of that dialogue are to identify sources of evaluation expertise available to support programmes using the DCED Standard, and to promote the Standard to programmes needing to improve their monitoring system. We would welcome further discussions on the topic, and invite you to contact us at Results@Enterprise-Development.org with any questions or comments.

Contents
1 Introduction
2 Why should evaluators be interested in monitoring?
2.1 Good monitoring is essential for effective management
2.2 Good monitoring is essential for effective evaluation
2.3 Some evaluation methodologies incorporate monitoring
3 What is the DCED Standard for Results Measurement?
4 How does the DCED Standard support evaluation?
4.1 The DCED Standard promotes clear theories of change
4.2 The DCED Standard provides additional data to test the theory of change
5 How do evaluations supplement the DCED Standard?
5.1 Evaluations are independent
5.2 Evaluations have more expertise and larger budgets
5.3 Evaluations can examine broader effects
5.4 Evaluations and the DCED Standard are for different audiences
6 Division of responsibilities between evaluator and programme team
7 Key References and further reading


DCED publications on M&E, M&E audits and the effectiveness of M&E standards

Posted on 17 September, 2014 – 10:47 AM

This posting is overdue. The Donor Committee for Enterprise Development (DCED) has been producing a lot of material on results management this year. Here are some of the items I have seen.

Of particular interest to me is the DCED Standard for Results Measurement. According to Jim Tanburn, there are about 60-70 programmes now using the Standard. Associated with this is an auditing service offered by DCED. From what I can see, nine programmes have been audited so far. Given the scale and complexity of the standards, the question in my mind, and probably that of others, is whether their use makes a significant difference to the performance of the programmes that have implemented them. Are they cost-effective?

This would not be an easy question to answer in any rigorous fashion, I suspect. There are likely to be many case-specific accounts of where and how the standards have helped improve performance, and perhaps some of where they have not helped or have even hindered. Some accounts are already available via the Voices from the Practitioners part of the DCED website.

The challenge would be how to aggregate judgements about impacts on a diverse range of programmes in a variety of settings. This is the sort of situation where one is looking for the “effects of a cause”, rather than “the causes of an effect”, because there is a standard intervention (adoption of the standards) but it is one which may have many different effects. A three step process might be feasible, or at least worth exploring:

1. Rank programmes in terms of the degree to which they have successfully adopted the standards. This should be relatively easy, given that there is a standard auditing process.

2. Rank programmes in terms of the relative observed/reported effects of the standards. This will be much more difficult because of the apples-and-pears nature of the impacts. But I have been exploring a way of doing so here: Pair comparisons: For where there is no common outcome measure? Another difficulty, which may be surmountable, is that “all the audit reports remain confidential and DCED will not share the contents of the audit report without seeking permission from the audited programmes”.

3. Look for the strength and direction of the correlation between the two measures, and for outliers (poor adoption/big effects, good adoption/few effects) where lessons could be learned.
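As a concrete illustration of step 3, the sketch below (my own, using hypothetical programme names and ranks, not a DCED method) computes a Spearman rank correlation between the two rankings and flags the programmes whose ranks diverge most, i.e. the outliers worth learning from.

# Step 3 sketch: correlate the adoption ranking with the effects ranking and
# flag outliers. Programme names and ranks below are hypothetical.
import numpy as np
from scipy.stats import spearmanr

programmes    = ["A", "B", "C", "D", "E", "F"]
adoption_rank = np.array([1, 2, 3, 4, 5, 6])   # 1 = most complete adoption of the Standard (from audits)
effects_rank  = np.array([2, 1, 5, 3, 6, 4])   # 1 = largest observed/reported effects (e.g. via pair comparisons)

rho, p_value = spearmanr(adoption_rank, effects_rank)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.2f})")

# Outliers: programmes with the biggest gap between the two rankings
# (poor adoption / big effects, or good adoption / few effects)
gaps = np.abs(adoption_rank - effects_rank)
for name, gap in sorted(zip(programmes, gaps), key=lambda x: -x[1])[:2]:
    print(f"Worth a closer look: programme {name} (rank gap = {gap})")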

 


Reframing the evidence debates: a view from the media for development sector

Posted on 9 August, 2014 – 5:48 PM

Abraham-Dowsing, Kavita, Anna Godfrey, and Zoe Khor. 2014. “Reframing the Evidence Debates: A View from the Media for Development Sector”. BBC Media Action. Available as pdf. This is part of BBC Media Action’s Bridging Theory and Practice series. An accompanying appendices document is available here. It includes priority research questions, and more detail on the evidence examples cited in the paper. The report was prepared with funding from the UK Department for International Development.

Introduction: “Donors, policy-makers and practitioners need evidence to inform their policy and programming choices, resource allocation and spending decisions, yet producing and making use of high-quality research and evidence is not straightforward. This is particularly the case in sectors that do not have a long history of research or evaluation, that are operating in fragile states with low research capacity and that are trying to bring about complex change. The media for development sector (see Box 1) is one such example. Nonetheless, donors, governments and private foundations working in international development have long recognised the importance of independent media and information sources in their work and the role that communication can play in bringing about change. Despite this recognition, however, in debates around evidence on the role of media and communication in achieving development outcomes, assertions of “no evidence” or “not enough evidence” are commonplace. With the evidence agenda gaining more prominence in the development sector, there is a risk for any sector that finds it difficult to have a clear, concise and cohesive narrative around its evidence of impact.

This paper is based on a series of interviews with practitioners, evaluators and donors working in the media for development sector, and looks at their understanding of what counts as evidence and their views on the existing evidence base. It argues that compelling evidence of impact does exist and is being used – although this varies by thematic area. For example, it highlights that evidence in the area of health communication is stronger and more integrated into practice compared with other thematic areas such as media and governance or  humanitarian response outcomes. The paper also contends that, alongside evidencing development outcomes (for example, media’s impact on knowledge, attitudes, efficacy, norms and behaviours), more evidence is needed to answer specific questions about how, why and in what ways media and communication affect people and societies – and how this varies by local context.

The paper argues that the lack of clear evidential standards for reporting evidence from media for development programmes, the limited efforts to date to collate and systematically review the evidence that does exist, and the lack of relevant fora in which to critique and understand evaluation findings, are significant barriers to evidence generation. The paper calls for an “evidence agenda”, which creates shared evidential standards to systematically map the existing evidence, establishes fora to discuss and share existing evidence, and uses strategic, longer-term collaborative investment in evaluation to highlight where evidence gaps need to be filled in order to build the evidence base. Without such an agenda, as a field, we risk evidence producers, assessors and funders talking at cross purposes. ”

As the paper’s conclusion states, we actively welcome conversations with you and we expect that these will affect and change the focus of the evidence agenda. We also expect to be challenged! What we have tried to do here is articulate a clear starting point, highlighting the risk of not taking this conversation forward. We actively welcome your feedback during our consultation on the paper which runs from August until the end of October 2014, and invite you to share the paper and appendices widely with any colleagues and networks who you think appropriate.

Contents page
Introduction
1. What is evidence? An expert view
2. Evidence – what counts and where are there gaps?
3. Building an evidence base – points for consideration
4. The challenges of building an evidence base
5. An evidence agenda – next steps in taking this conversation forward
Conclusion
Appendix 1: Methodology and contributors
Appendix 2: Examples of compelling evidence
Appendix 3: Priority research questions for the evidence agenda
Appendix 4: A note on M&E, R & L and DME
Appendix 5: Mixed methods evaluation evidence – farmer field schools
Appendix 6: Methodological challenges


CDI conference proceedings: Improving the use of M&E processes and findings

Posted on 17 July, 2014 – 12:12 PM

“On the 20th and 21st of March 2014 CDI organized its annual ‘M&E on the cutting edge’ conference on the topic: ‘Improving the Use of M&E Processes and Findings’.

This conference is part of our series of yearly ‘M&E on the cutting edge’ events. The conference was held on the 20th and 21st of March 2014 in Wageningen, the Netherlands. It looked in particular at the conditions under which the use of M&E processes and findings can be improved. The conference report can now be accessed here in pdf format.

Conference participants had the opportunity to learn about:

  • frameworks to understand utilisation of monitoring and evaluation findings and process;
  • different types of utilisation of monitoring and evaluation process and findings, when and for whom these are relevant;
  • conditions that improve utilisation of monitoring and evaluation processes and findings.

Conference presentations can be found online here: http://www.managingforimpact.org/event/cdi-conference-improving-use-me-processes-and-findings


Gender, Monitoring, Evaluation and Learning – 9 new articles in pdfs

Posted on 10 July, 2014 – 10:27 AM
…in Gender & Development, Volume 22, Issue 2, July 2014: Gender, Monitoring, Evaluation and Learning
“In this issue of G&D, we examine the topic of Gender, Monitoring, Evaluation and Learning (MEL) from a gender equality and women’s rights perspective, and hope to prove that a good MEL system is an activist’s best friend! This unique collection of articles captures the knowledge of a range of development practitioners and women’s rights activists, who write about a variety of organisational approaches to MEL. Contributors come from both the global South and the global North and have tried to share their experience accessibly, making what is often very complex and technical material as clear as possible to non-MEL specialists.”

Contents

The links below will take you to the article abstract on the Oxfam Policy & Practice website, from where you can download the article for free.

Editorial

Introduction to Gender, Monitoring, Evaluation and Learning
Kimberly Bowman and Caroline Sweetman

Articles

Women’s Empowerment Impact Measurement Initiative
Nidal Karim, Mary Picard, Sarah Gillingham and Leah Berkowitz

A review of approaches and methods to measure economic empowerment of women and girls
Paola Pereznieto and Georgia Taylor

Helen Lindley

Capturing changes in women’s lives: the experiences of Oxfam Canada in applying feminist evaluation principles to monitoring and evaluation practice
Carol Miller and Laura Haylock

A survivor behind every number: using programme data on violence against women and girls in the Democratic Republic of Congo to influence policy and practice
Marie-France Guimond and Katie Robinette

Learning about women’s empowerment in the context of development projects: do the figures tell us enough?
Jane Carter, Sarah Byrne, Kai Schrader, Humayun Kabir, Zenebe Bashaw Uraguchi, Bhanu Pandit, Badri Manandhar, Merita Barileva, Norbert Pijls & Pascal Fendrich

Resources

Compiled by Liz Cooke

Resources List – Gender, Monitoring, Evaluation and Learning

 
