The Fallacy of AI Functionality

 

Evaluators should have a basic working knowledge of how to evaluate algorithms used to manage human affairs (law, finance, social services, etc.) because algorithm designs embody human decisions and can have large-scale consequences. For this reason I recommend:

Raji ID, Kumar IE, Horowitz A, et al. (2022) The Fallacy of AI Functionality. In: 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21 June 2022, pp. 959–972. ACM. DOI: 10.1145/3531146.3533158.
Deployed AI systems often do not work. They can be constructed haphazardly, deployed indiscriminately, and promoted deceptively. However, despite this reality, scholars, the press, and policymakers pay too little attention to functionality. This leads to technical and policy solutions focused on “ethical” or value-aligned deployments, often skipping over the prior question of whether a given system functions, or provides any benefits at all. To describe the harms of various types of functionality failures, we analyze a set of case studies to create a taxonomy of known AI functionality issues. We then point to policy and organizational responses that are often overlooked and become more readily available once functionality is drawn into focus. We argue that functionality is a meaningful AI policy challenge, operating as a necessary first step towards protecting affected communities from algorithmic harm.

CONTENTS
1. Introduction
2. Related work
3. The functionality assumption
4. The many dimensions of dysfunction
4.1 Methodology
4.2 Failure taxonomy
4.2.1 Impossible Tasks
Conceptually Impossible
Practically Impossible
4.2.2 Engineering Failures
Model Design Failures
Model Implementation Failures
Missing Safety Features
4.2.3 Deployment Failures
Robustness Issues
Failure under Adversarial Attacks
Unanticipated Interactions
4.2.4 Communication Failures
Falsified or Overstated Capabilities
Misrepresented Capabilities
5. Dealing with dysfunction: opportunities for intervention on functional safety
5.1 Legal/Policy Interventions
5.1.1 Consumer Protection
5.1.2 Products Liability Law
5.1.3 Warranties
5.1.4 Fraud
5.1.5 Other Legal Avenues Already Being Explored
5.2 Organizational Interventions
5.2.1 Internal Audits & Documentation
5.2.2 Product Certification & Standards
5.2.3 Other Interventions
6. Conclusion: the road ahead

Free Coursera online course: Qualitative Comparative Analysis (QCA)

Highly recommended! A well-organised, clear and systematic exposition. Available at: https://www.coursera.org/learn/qualitative-comparative-analysis

About this Course

Welcome to this massive open online course (MOOC) about Qualitative Comparative Analysis (QCA). Please read the points below before you start the course. They will help you prepare for the course and get the most out of it, and will also help you determine whether the course offers the knowledge and skills you are looking for.

What can you do with QCA?

  • QCA is a comparative method, used mainly in the social sciences, for assessing cause-effect relations (i.e. causation).
  • QCA is relevant for researchers who normally work with qualitative methods and are looking for a more systematic way of comparing and assessing cases.
  • QCA is also useful for quantitative researchers who would like to assess alternative (more complex) aspects of causation, such as how factors work together in producing an effect (a minimal illustrative sketch follows this list).
  • QCA can be used for the analysis of cases at all levels: macro (e.g. countries), meso (e.g. organizations) and micro (e.g. individuals).
  • QCA is mostly used for research on small- and medium-sized samples and populations (10-100 cases), but it can also be used for larger groups. Ideally, the number of cases is at least 10.
  • QCA cannot be used for an in-depth study of a single case.
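
To make the set-theoretic logic behind QCA a little more concrete, here is a minimal, hypothetical sketch (not part of the course materials) of the core crisp-set calculation: the consistency and coverage of a claim that a combination of conditions is sufficient for an outcome. The case data, condition names and 0/1 codings below are invented purely for illustration; real analyses would normally use dedicated QCA software and the calibration and truth-table procedures taught in the course.

```python
# Illustrative crisp-set QCA sketch: all cases and condition names are invented.
cases = [
    # conditions: strong_unions, left_government; outcome: generous_welfare (1 = present, 0 = absent)
    {"strong_unions": 1, "left_government": 1, "generous_welfare": 1},
    {"strong_unions": 1, "left_government": 1, "generous_welfare": 1},
    {"strong_unions": 1, "left_government": 0, "generous_welfare": 0},
    {"strong_unions": 0, "left_government": 1, "generous_welfare": 1},
    {"strong_unions": 0, "left_government": 0, "generous_welfare": 0},
]

def consistency(cases, conditions, outcome):
    """Share of cases showing the condition combination that also show the outcome."""
    members = [c for c in cases if all(c[k] == v for k, v in conditions.items())]
    if not members:
        return None
    return sum(c[outcome] for c in members) / len(members)

def coverage(cases, conditions, outcome):
    """Share of outcome cases that are covered by the condition combination."""
    outcome_cases = [c for c in cases if c[outcome] == 1]
    if not outcome_cases:
        return None
    covered = [c for c in outcome_cases if all(c[k] == v for k, v in conditions.items())]
    return len(covered) / len(outcome_cases)

combo = {"strong_unions": 1, "left_government": 1}
print("consistency:", consistency(cases, combo, "generous_welfare"))  # 1.0
print("coverage:", coverage(cases, combo, "generous_welfare"))        # ~0.67
```

In this toy example the combination is fully consistent with the sufficiency claim (consistency = 1.0) but accounts for only two of the three outcome cases (coverage ≈ 0.67), which is exactly the kind of trade-off a QCA study interprets.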

What will you learn in this course?

  • The course is designed for people who have little or no experience with QCA.
  • After the course you will understand the methodological foundations of QCA.
  • After the course you will know how to conduct a basic QCA study by yourself.

How is this course organized?

  • The MOOC takes five weeks. The specific learning objectives and activities per week are mentioned in appendix A of the course guide. Please find the course guide under Resources in the main menu.
  • The learning objectives with regard to understanding the foundations of QCA and practically conducting a QCA study are pursued throughout the course. However, week 1 focuses more on the general analytic foundations, and weeks 2 to 5 are more about the practical aspects of a QCA study.
  • The activities of the course include watching the videos, consulting supplementary material where necessary, and doing assignments. The activities should be done in that order: first watch the videos; then consult the supplementary material (if desired) for more details and examples; then do the assignments.
  • There are 10 assignments. Appendix A in the course guide states the estimated time needed to complete the assignments and how they are graded. Only assignments 1 to 6 and 8 are mandatory. These 7 mandatory assignments must be completed successfully to pass the course.
  • Completing the assignments successfully is one condition for receiving a course certificate. Further information about receiving a course certificate can be found here: https://learner.coursera.help/hc/en-us/articles/209819053-Get-a-Course-Certificate

About the supplementary material

  • The course can be followed by watching the videos. It is not strictly necessary, but it is recommended, to study the supplementary reading material (as mentioned in the course guide) for further details and examples. Further, because some of the covered topics are quite technical (particularly topics in weeks 3 and 4 of the course), we provide several worked examples that supplement the videos by offering more specific illustrations and explanations. These worked examples can be found under Resources in the main menu.
  • Note that the supplementary readings are mostly not freely available. Books have to be bought or might be available in a university library; journal publications have to be ordered online or are accessible via a university license.
  • The textbook by Schneider and Wagemann (2012) functions as the primary reference for further information on the topics that are covered in the MOOC. Appendix A in the course guide mentions which chapters in that book can be consulted for which week of the course.
  • The publication by Schneider and Wagemann (2012) is comprehensive and detailed, and covers almost all topics discussed in the MOOC. However, for further study, appendix A in the course guide also mentions some additional supplementary literature.
  • Please find the full list of references for all citations (mentioned in this course guide, in the MOOC, and in the assignments) in appendix B of the course guide.

 

 

Making impact evaluation matter: Better evidence for effective policies and programmes

Asian Development Bank, Manila, 1-5 September 2014

The Asian Development Bank (ADB) and the International Initiative for Impact Evaluation (3ie) are hosting a major international impact evaluation conference Making Impact Evaluation Matter from 1-5 September 2014 in Manila. The call for proposals to present papers and conduct workshops at the conference is now open.

Making Impact Evaluation Matter will comprise 2.5 days of pre-conference workshops from 1-3 September 2014 and 2.5 days of conference sessions from 3-5 September. Major international figures in the field of impact evaluation are being invited to speak at the plenary sessions of the conference. There will be five to six streams of pre-conference workshops and up to eight streams of parallel sessions during the conference, allowing for over 150 presentations.

Proposals are now being invited for presentations on any aspect of impact evaluations and systematic reviews, including findings, methods and translation of evidence into policy. Researchers are welcome to submit proposals on the design (particularly innovative designs for difficult to evaluate interventions), implementation, findings and use of impact evaluations and systematic reviews. Policymakers and development programme managers are welcome to submit proposals on the use of impact evaluation and systematic review findings.

Parallel sessions at the conference will be organised around the following themes/sectors: (a) infrastructure (transport, energy, information and communication technology, urban development, and water), (b) climate change/environment/natural resources, (c) social development (health, education, gender equity, poverty and any other aspect of social development), (d) rural development (agriculture, food security and any other aspect of rural development), (e) financial inclusion, (f) institutionalisation of impact evaluation, and incorporating impact evaluation or systematic reviews into institutional appraisal and results frameworks, (g) impact evaluation of institutional and policy reform (including public management and governance), (h) impact evaluation methods, and (i) promotion of the use of evidence.

Workshop proposals are being invited on all aspects of designing, conducting and disseminating findings from impact evaluations and systematic reviews. The workshops can be at an introductory, intermediate or advanced level.  The duration of a workshop can vary from half a day to two full days.

All proposals must be submitted via email to conference2014@3ieimpact.org with the email subject line ‘Proposal: presentation’ or ‘Proposal: workshop’. The proposal submission deadline is 3 July 2014.

Bursaries are available for participants from low- and middle-income countries. Employees of international organisations (other than the Asian Development Bank) are, however, not eligible for bursaries. A bursary will cover a return economy airfare and hotel accommodation. All other expenses (ground transport, visa, meals outside the event) must be paid by the participant or their employer. Bursary applications must be made through the conference website: www.impactevaluation2014.org. The deadline for bursary applications is 15 July 2014.

Non-sponsored participants are required to pay a fee of US$250 for participating in the conference or US$450 for participating in the pre-conference workshops as well as the conference. Those accepted to present a workshop will be exempted from the fee.

For more information on the submission of proposals for the conference, read the Call for Proposals.

For the latest updates on Making Impact Evaluation Matter, visit www.impactevaluation2014.org

Queries may be sent to conference2014@3ieimpact.org.

International Energy Policies & Programmes Evaluation Conference (IEPPEC) conference 9-11 September 2014

– the leading event for energy policy and programme evaluators

Sharing and Accelerating the Value and Use of Monitoring, Reporting and Verification Practices.

There is a wide range of regional, national and international policies and programmes designed to achieve improved energy efficiency, and thereby reductions in GHG emissions and in living costs. These are top priorities for bodies such as the EU, IEA and UN in addressing the critical issues of climate change, resource conservation and living standards.

The increasing focus on this policy area has resulted in more challenging objectives and intended outcomes for interventions, along with growing investment. But are we investing correctly?

Pioneering approaches to evaluating investments and policy decisions related to energy efficiency will be at the forefront of presentations and debate at the IEPPEC, held in Berlin between the 9th and 11th of September 2014.

The conference presents an unparalleled opportunity to bring together policy and evaluation practitioners, academics and others from around the world involved in the evaluation of energy and low carbon policies and programmes. Attendees will be able to debate the most effective means of ensuring that both commercial and community-based approaches to improving the sustainability of our energy use and making our economies more efficient are based on common metrics that can be compared across regions and regulatory jurisdictions. The focus over the three-day conference is for policy makers, programme managers and evaluators to share ideas for improving the assessment of the potential and actual impacts of low carbon policies and programmes, and to facilitate a deeper understanding of evaluation methods that work in practice.

The conference features:

  • Presentation of over 85 full, peer-reviewed evaluation papers by their authors
  • Four panel discussions
  • Two keynote sessions
  • A two-day poster exhibit
  • Many opportunities to share learning and network with other attendees

The conference is filling up fast, so to avoid disappointment, please book your place now by visiting http://www.iepec.org.

Additional information:

  • For the draft conference agenda, please click here
  • Refreshments, breakfasts and lunches are provided.
  • For any further information, please visit http://www.iepec.org

Conference: Next Generation Evaluation: Embracing Complexity, Connectivity and Change

“On Nov. 14th 2013, FSG and Stanford Social Innovation Review convened Next Generation Evaluation: Embracing Complexity, Connectivity and Change to discuss emerging ideas that are defining the future of social sector evaluation. The Conference brought together nearly 400 participants to learn about the trends driving the need for evaluation to evolve, the characteristics and approaches that represent Next Generation Evaluation, and potential implications for the social sector.”

The conference website provides eight video recordings of presentations and PDF copies of many more.

Introducing Next Generation Evaluation, Hallie Preskill, Managing Director, FSG

Developmental Evaluation: An Approach to Evaluating Complex Social Change Initiatives, Kathy Brennan, Research and Evaluation Advisor, AARP

Shared Measurement: A Catalyst to Drive Collective Learning and Action, Patricia Bowie, Consultant, Magnolia Place Community Initiative

Using Data for Good: The Potential and Peril of Big Data, Lucy Bernholz, Visiting Scholar, Center on Philanthropy and Civil Society, Stanford University

Frontiers of Innovation: A Case Study in Using Developmental Evaluation to Improve Outcomes for Vulnerable Children, James Radner, Assistant Professor, University of Toronto

Project SAM: A Case Study in Shared Performance Measurement for Community Impact, Sally Clifford, Program Director, Experience Matters, and Tony Banegas, Philanthropy Advisor, Arizona Community Foundation

UN Global Pulse: A Case Study in Leveraging Big Data for Global Development, Robert Kirkpatrick, Director, UN Global Pulse

Panel: Implications for the Social Sector (“So What?”). Presenters: Lisbeth Schorr, Senior Fellow, Center for the Study of Social Policy; Fay Twersky, Director, Effective Philanthropy Group, The William and Flora Hewlett Foundation; Alicia Grunow, Senior Managing Partner, Design, Development, and Improvement Research, Carnegie Foundation for the Advancement of Teaching. Moderator: Srik Gopalakrishnan, Director, FSG

Small Group Discussion: Implications for Individuals and Organizations (“Now What?”). Moderator: Eva Nico, Director, FSG

Embracing Complexity, Connectivity, and Change, Brenda Zimmerman, Professor, York University

DCED Global Seminar on Results Measurement 24-26 March 2014, Bangkok

Full text available here: http://www.enterprise-development.org/page/seminar2014

“Following popular demand, the DCED is organising the second Global Seminar on results measurement in the field of private sector development (PSD), 24-26 March 2014 in Bangkok, Thailand. The Seminar is being organised in cooperation with the ILO and with financial support from the Swiss State Secretariat for Economic Affairs (SECO). It will have a similar format to the DCED Global Seminar in 2012, which was attended by 100 participants from 54 different organisations, field programmes and governments.

Since 2012, programmes and agencies have been adopting the DCED Standard for results measurement in increasing numbers; recently, several have published the reports of their DCED audit. This Seminar will explore what is currently known, and what we need to know; specifically, the 2014 Seminar is likely to be structured as follows:

  • An introduction to the DCED, its Results Measurement Working Group, the DCED Standard for results measurement and the Standard audit system
  • Insights from 10 programmes experienced with the Standard, based in Bangladesh, Cambodia, Fiji, Georgia, Kenya, Nepal, Nigeria and elsewhere (further details to come)
  • Perspectives from development agencies on results measurement
  • Cross-cutting issues, such as the interface between the Standard and evaluation, measuring systemic change, and using results in decision-making
  • A review of the next steps in learning, guidance and experience around the Standard
  • Further opportunities for participants to meet each other, learn about each other’s programmes and make contacts for later follow-up

You are invited to join the Seminar as a participant. Download the registration form here, and send it to Admin@Enterprise-Development.org. There is a fee of $600 for those accepted for participation, and all participants must pay their own travel, accommodation and insurance costs. Early registration is advised.”

INTRAC M&E Workshop: Practical Responses to Current Monitoring and Evaluation Debates

A one-day workshop for M&E practitioners, civil society organisations and development agencies to debate and share their experiences. There is increasing pressure on NGOs to improve their M&E systems, and often to move out of their methodological comfort zone to meet new requirements from donors and stakeholders. This event will examine the challenges faced by NGOs and their responses around four themes:

  • Designing and using baselines for complex programmes
  • Using Information and Communications Technology (ICT) in M&E
  • Experimental and quasi-experimental methods in M&E, including randomised control trials
  • M&E of advocacy

Download the overview paper.

Call for M&E case studies

We are looking for short case studies focusing on one or more of the four event themes (see above). The case studies will be shared and will form the basis of the discussions at the workshop.

*Deadline for abstracts (max. 500 words): Friday 13 September 2013*

Please email abstracts to research@intrac.org

Event bookings

Event cost: £80 (£60 early bird booking before 19 October 2013)

Please return the booking form to zwilkinson@intrac.org

 

Webinar series on evaluation: The beginnings of a list

To be extended and updated, with your help!

  • American Evaluation Association: Coffee Break Demonstrations are 20 minute long webinars designed to introduce audience members to new tools, techniques, and strategies in the field of evaluation.
  • InterAction: Impact Evaluation Guidance Note and Webinar Series: 8 webinars covering Introduction to Impact Evaluation, Linking Monitoring and Evaluation to Impact Evaluation, Introduction to Mixed Methods in Impact Evaluation, and Use of Impact Evaluation Results
  • MEASURE Evaluation webinars: 20 webinars since January 2012
  • Claremont Evaluation Center Webinar Series  “The Claremont Evaluation Center is pleased to offer a series of webinars on the discipline and profession of evaluation.  This series is free and available to anyone across the globe with an internet connection.”
  • MY M&E website: Webinars on Equity-focused evaluations (17 webinars), IOCE webinar series on evaluation associations, Emerging practices in development evaluation (6 webinars), Developing capacities for country M&E systems (16 webinars), Country-led M&E Systems (6 webinars)

Plus some guidance on developing and evaluating webinars

ICAI Seeks Views on Revised Evaluation Framework

 

“In our first report, ICAI’s Approach to Effectiveness and Value for Money, we set out an evaluation framework, consisting of 22 questions under 4 guiding criteria (objectives, delivery, impact and learning), to guide our lines of enquiry in reviews. In the light of our experience to date in carrying out our reports, we have reviewed this framework. The revised framework is available at this link: ICAI revised evaluation framework

We are now entering a period of consultation on the revised framework which will run until 24 May 2013. If you have any comments or views, please email enquiries@icai.independent.gov.uk  or post them to: The Secretariat, Independent Commission for Aid Impact, Dover House, 66 Whitehall, London SW1A 2AU”

Open consultation: Triennial review of the Independent Commission for Aid Impact (ICAI)

(Website that hosts the text below)

This consultation closes on 26 April 2013

On 21 March the Government announced the triennial review of the Independent Commission for Aid Impact (ICAI) and is seeking views of stakeholders who wish to contribute to the Review. Triennial Reviews of Non-Departmental Public Bodies (NDPBs) are part of the Government’s commitment to review all NDPBs, with the aim of increasing accountability for actions carried out on behalf of the State.

The ICAI’s strategic aim is to provide independent scrutiny of UK aid spending, to promote the delivery of value for money for British taxpayers and to maximise the impact of aid.

The Review will be conducted in line with Cabinet Office principles and guidance, in two stages.

The first stage will:

  • Identify and examine the key functions of the ICAI and assess how these functions contribute to the core business of DFID;
  • Assess the requirement for these functions to continue given other scrutiny processes;
  • If continuing, assess how the key functions might best be delivered; if one of these options is continuing delivery through the ICAI, then make an assessment against the Government’s “three tests”: technical function; political impartiality; and the need for independence from Ministers.

If the outcome of stage one is that delivery should continue through the ICAI, the second stage of the review will:

  • Review whether ICAI is operating in line with the recognised principles of good corporate governance, using the Cabinet Office “comply or explain” standard approach.

In support of these aims we would welcome input and evidence from stakeholders, focused on these main questions:

ICAI’s functions

For the purposes of this review, we have defined ICAI’s key functions as follows:

  • Produce a wide range of independent, high quality/professionally credible and accessible reports (including evaluations, VfM reviews, investigations) setting out evidence of the impact and value for money of UK development efforts;
  • Work with and for Parliament to help hold the UK Government to account for its development programme, and make information on this programme available to the public;
  • Produce appropriately targeted recommendations to be implemented/ followed up by HMG.

Which of these functions do you think are still needed? What would be the impact if ICAI ceased to exist?

Would you define ICAI’s functions differently?

Do you think any of the following delivery mechanisms would be more appropriate or cost effective at delivering these functions: Local government, the voluntary sector, private sector, another existing body or DFID itself?

To date, do you think ICAI has focused on scrutinising UK aid spend or the wider HMG development effort? What do you think it should be doing?

Where do you think ICAI sits on the spectrum between audit and research? Is this where they should be?

How far can and should ICAI have a role in holding HMG to account?

Production of reports

What is the quality of ICAI reports? Is the expertise of those producing the reports appropriate? How does this compare to other scrutiny bodies that you know of?

How far does the methodology used by ICAI add value to other scrutiny of DFID programmes (e.g. IDC, NAO, DFID internal)?

How far does ICAI involve beneficiaries in its work?

What impact have ICAI reviews had on DFID staff and resources?

How independent do you believe ICAI is? How important do you think this independence is for ICAI’s ability to deliver its functions effectively?

How much of an impact do you think the Commissioners have on ICAI recommendations and reports? What added value do you think they bring? Do they have the right skillset?

Making information available to the IDC and the public

How important do you think ICAI’s role is in providing information about UK development to taxpayers?

What impact has ICAI had on public perceptions of UK development?

Production of targeted recommendations

What has been the added value of ICAI’s recommendations? How do these compare to other scrutiny bodies that you know of?

How far and why have recommendations been followed up?

What impact has ICAI had on DFID’s own approach to monitoring impact and value for money?

How far has ICAI promoted lesson learning in DFID?

General

Do you think ICAI could improve? If so, how?

Do you have any other comments?

The government is seeking views of stakeholders on the Triennial Review of the Independent Commission for Aid Impact (ICAI).

Contact us by 26 April 2013

Write to us by email or by post to:

ICAI Review Team KB 2.2
22 Whitehall
London
SW1A 2EG