The Fallacy of AI Functionality


Evaluators should have a basic working knowledge of how to evaluate algorithms used to manage human affairs (law, finance, social services, etc.), because algorithm designs embody human decisions and can have large-scale consequences. For this reason I recommend:

Raji ID, Kumar IE, Horowitz A, et al. (2022) The Fallacy of AI Functionality. In: 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21 June 2022, pp. 959–972. ACM. DOI: 10.1145/3531146.3533158.
Deployed AI systems often do not work. They can be constructed haphazardly, deployed indiscriminately, and promoted deceptively. However, despite this reality, scholars, the press, and policymakers pay too little attention to functionality. This leads to technical and policy solutions focused on “ethical” or value-aligned deployments, often skipping over the prior question of whether a given system functions, or provides any benefits at all. To describe the harms of various types of functionality failures, we analyze a set of case studies to create a taxonomy of known AI functionality issues. We then point to policy and organizational responses that are often overlooked and become more readily available once functionality is drawn into focus. We argue that functionality is a meaningful AI policy challenge, operating as a necessary first step towards protecting affected communities from algorithmic harm.

CONTENTS
1. Introduction
2. Related work
3. The functionality assumption
4. The many dimensions of dysfunction
4.1 Methodology
4.2 Failure taxonomy
4.2.1 Impossible Tasks
Conceptually Impossible
Practically Impossible
4.2.2 Engineering Failures
Model Design Failures
Model Implementation Failures
Missing Safety Features
4.2.3 Deployment Failures
Robustness Issues
Failure under Adversarial Attacks
Unanticipated Interactions
4.2.4 Communication Failures
Falsified or Overstated Capabilities
Misrepresented Capabilities
5. Dealing with dysfunction: opportunities for intervention on functional safety
5.1 Legal/policy interventions
5.1.1 Consumer Protection
5.1.2 Products Liability Law
5.1.3 Warranties
5.1.4 Fraud
5.1.5 Other Legal Avenues Already Being Explored
5.2 Organizational interventions
5.2.1 Internal Audits & Documentation
5.2.2 Product Certification & Standards
5.2.3 Other Interventions
6. Conclusion: the road ahead

Making impact evaluation matter: Better evidence for effective policies and programmes

Asian Development Bank, Manila, 1-5 September 2014

The Asian Development Bank (ADB) and the International Initiative for Impact Evaluation (3ie) are hosting a major international impact evaluation conference, Making Impact Evaluation Matter, from 1-5 September 2014 in Manila. The call for proposals to present papers and conduct workshops at the conference is now open.

Making Impact Evaluation Matter will comprise 2.5 days of pre-conference workshops from 1-3 September 2014 and 2.5 days of conference sessions from 3-5 September. Major international figures in the field of impact evaluation are being invited to speak at the plenary sessions of the conference. There will be five to six streams of pre-conference workshops and up to eight streams of parallel sessions during the conference, allowing for over 150 presentations.

Proposals are now being invited for presentations on any aspect of impact evaluations and systematic reviews, including findings, methods and the translation of evidence into policy. Researchers are welcome to submit proposals on the design (particularly innovative designs for difficult-to-evaluate interventions), implementation, findings and use of impact evaluations and systematic reviews. Policymakers and development programme managers are welcome to submit proposals on the use of impact evaluation and systematic review findings.

Parallel sessions at the conference will be organised around the following themes/sectors: (a) infrastructure (transport, energy, information and communication technology, urban development, and water), (b) climate change/environment/natural resources, (c) social development (health, education, gender equity, poverty and any other aspect of social development), (d) rural development (agriculture, food security and any other aspect of rural development), (e) financial inclusion, (f) institutionalisation of impact evaluation, and incorporating impact evaluation or systematic reviews into institutional appraisal and results frameworks, (g) impact evaluation of institutional and policy reform (including public management and governance), (h) impact evaluation methods, and (i) promotion of the use of evidence.

Workshop proposals are being invited on all aspects of designing, conducting and disseminating findings from impact evaluations and systematic reviews. The workshops can be at an introductory, intermediate or advanced level.  The duration of a workshop can vary from half a day to two full days.

All proposals must be submitted via email to conference2014@3ieimpact.org with the email subject line ‘Proposal: presentation’ or ‘Proposal: workshop’. The proposal submission deadline is 3 July 2014.

Bursaries are available for participants from low- and middle-income countries. Employees of international organisations (other than the Asian Development Bank) are, however, not eligible for bursaries. A bursary will cover return economy airfare and hotel accommodation. All other expenses (ground transport, visa, meals outside the event) must be paid by the participant or their employer. Bursary applications must be made through the conference website: www.impactevaluation2014.org. The deadline for bursary applications is 15 July 2014.

Non-sponsored participants are required to pay a fee of US$250 for participating in the conference or US$450 for participating in the pre-conference workshops as well as the conference. Those accepted to present a workshop will be exempted from the fee.

For more information on the submission of proposals for the conference, read the Call for Proposals.

For the latest updates on Making Impact Evaluation Matter, visit www.impactevaluation2014.org

Queries may be sent to conference2014@3ieimpact.org.

International Energy Policies & Programmes Evaluation Conference (IEPPEC), 9-11 September 2014

– the leading event for energy policy and programme evaluators

Sharing and Accelerating the Value and Use of Monitoring, Reporting and Verification Practices.

There is a wide range of regional, national and international policies and programmes designed to achieve improved energy efficiency, and thereby reductions in GHG emissions and in living costs. These are top priorities for bodies such as the EU, IEA and UN in addressing the critical issues of climate change, resource conservation and living standards.

The increasing focus on this policy area has resulted in more challenging objectives and intended outcomes for interventions, along with growing investment. But are we investing correctly?

Pioneering approaches to evaluating investments and policy decisions related to energy efficiency will be at the forefront of presentations and debate at the IEPPEC, held in Berlin between the 9th and 11th of September 2014.

The conference presents an unparalleled opportunity to bring together policy and evaluation practitioners, academics and others from around the world involved in the evaluation of energy and low-carbon policies and programmes. Attendees will be able to debate the most effective means of ensuring that both commercial and community-based approaches to improving the sustainability of our energy use and making our economies more efficient are based on common metrics that can be compared across regions and regulatory jurisdictions. The focus over the three-day conference is for policy makers, programme managers and evaluators to share ideas for improving the assessment of potential and actual impacts of low-carbon policies and programmes, and to facilitate a deeper understanding of evaluation methods that work in practice.

The conference features:

• Presentation of over 85 full and peer-reviewed evaluation papers by their authors

• Four panel discussions

• Two keynote sessions

• A two-day poster exhibit

• Lots of opportunity to share learning and network with other attendees

The conference is filling up fast, so to avoid disappointment, please book your place now by visiting http://www.iepec.org.

Additional information:

• For the draft conference agenda, please click here

• Refreshments, breakfasts and lunches are provided.

• For any further information, please visit http://www.iepec.org

Conference: Next Generation Evaluation: Embracing Complexity, Connectivity and Change

“On Nov. 14th 2013, FSG and Stanford Social Innovation Review convened Next Generation Evaluation: Embracing Complexity, Connectivity and Change to discuss emerging ideas that are defining the future of social sector evaluation. The Conference brought together nearly 400 participants to learn about the trends driving the need for evaluation to evolve, the characteristics and approaches that represent Next Generation Evaluation, and potential implications for the social sector.”

The conference website provides eight video recordings of presentations and PDF copies of many more.

• Introducing Next Generation Evaluation, Hallie Preskill, Managing Director, FSG

• Developmental Evaluation: An Approach to Evaluating Complex Social Change Initiatives, Kathy Brennan, Research and Evaluation Advisor, AARP

• Shared Measurement: A Catalyst to Drive Collective Learning and Action, Patricia Bowie, Consultant, Magnolia Place Community Initiative

• Using Data for Good: The Potential and Peril of Big Data, Lucy Bernholz, Visiting Scholar, Center on Philanthropy and Civil Society, Stanford University

• Frontiers of Innovation: A Case Study in Using Developmental Evaluation to Improve Outcomes for Vulnerable Children, James Radner, Assistant Professor, University of Toronto

• Project SAM: A Case Study in Shared Performance Measurement for Community Impact, Sally Clifford, Program Director, Experience Matters, and Tony Banegas, Philanthropy Advisor, Arizona Community Foundation

• UN Global Pulse: A Case Study in Leveraging Big Data for Global Development, Robert Kirkpatrick, Director, UN Global Pulse

• Panel: Implications for the Social Sector (“So What?”), with Lisbeth Schorr, Senior Fellow, Center for the Study of Social Policy; Fay Twersky, Director, Effective Philanthropy Group, The William and Flora Hewlett Foundation; and Alicia Grunow, Senior Managing Partner, Design, Development, and Improvement Research, Carnegie Foundation for the Advancement of Teaching. Moderator: Srik Gopalakrishnan, Director, FSG

• Small Group Discussion: Implications for Individuals and Organizations (“Now What?”). Moderator: Eva Nico, Director, FSG

• Embracing Complexity, Connectivity, and Change, Brenda Zimmerman, Professor, York University

DCED Global Seminar on Results Measurement 24-26 March 2014, Bangkok

Full text available here: http://www.enterprise-development.org/page/seminar2014

“Following popular demand, the DCED is organising the second Global Seminar on results measurement in the field of private sector development (PSD), 24-26 March 2014 in Bangkok, Thailand. The Seminar is being organised in cooperation with the ILO and with financial support from the Swiss State Secretariat for Economic Affairs (SECO). It will have a similar format to the DCED Global Seminar in 2012, which was attended by 100 participants from 54 different organisations, field programmes and governments.

Since 2012, programmes and agencies have been adopting the DCED Standard for results measurement in increasing numbers; recently, several have published the reports of their DCED audit. This Seminar will explore what is currently known, and what we need to know; specifically, the 2014 Seminar is likely to be structured as follows:

  • An introduction to the DCED, its Results Measurement Working Group, the DCED Standard for results measurement and the Standard audit system
  • Insights from 10 programmes experienced with the Standard, based in Bangladesh, Cambodia, Fiji, Georgia, Kenya, Nepal, Nigeria and elsewhere (further details to come)
  • Perspectives from development agencies on results measurement
  • Cross-cutting issues, such as the interface between the Standard and evaluation, measuring systemic change, and using results in decision-making
  • A review of the next steps in learning, guidance and experience around the Standard
  • Further opportunities for participants to meet each other, learn about each other’s programmes and make contacts for later follow-up

You are invited to join the Seminar as a participant. Download the registration form here, and send to Admin@Enterprise-Development.org. There is a fee of $600 for those accepted for participation, and all participants must pay their own travel, accommodation and insurance costs. Early registration is advised.”

AEA Conference: Evaluation in Complex Ecologies

Relationships, Responsibilities, Relevance
26th Annual Conference of the American Evaluation Association
Minneapolis, Minnesota, USA
Conference: October 24-27, 2012
Workshops: October 22, 23, 24, 28

“Evaluation takes place in complex global and local ecologies where we evaluators play important roles in building better organizations and communities and in creating opportunities for a better world. Ecology is about how systems work, engage, intersect, transform, and interrelate. Complex ecologies are comprised of relationships, responsibilities, and relevance within our study of programs, policies, projects, and other areas in which we carry out evaluations.

Relationships. Concern for relationships obliges evaluators to consider questions such as: what key interactions, variables, or stakeholders do we need to attend to (or not) in an evaluation? Evaluations do not exist in a vacuum disconnected from issues, tensions, and historic and contextualized realities, systems, and power dynamics. Evaluators who are aware of the complex ecologies in which we work attend to relationships to identify new questions and to pursue new answers. Other questions we may pursue include:

  • Whose interests and what decisions and relationships are driving the evaluation context?
  • How can evaluators attend to important interactions amidst competing interests and values through innovative methodologies, procedures, and processes?

Responsibilities. Attention to responsibilities requires evaluators to consider questions such as: what responsibilities, inclusive of and beyond the technical, do we evaluators have in carrying out our evaluations? Evaluators do not ignore the diversity of general and public interests and values in evaluation. Evaluations in complex ecologies make us aware of ethical and professional obligations and understandings between parties who seek to frame questions and insights that challenge them. Other questions we may pursue include:

  • How can evaluators ensure their work is responsive, responsible, ethical, equitable, and/or transparent for stakeholders and key users of evaluations?
  • In what ways might evaluation design, implementation, and utilization be responsible to issues pertinent to our general and social welfare?

Relevance. A focus on relevance leads to evaluations that consider questions such as: what relevance do our evaluations have in complex social, environmental, fiscal, institutional, and/or programmatic ecologies? Evaluators do not have the luxury of ignoring use, meaning, or sustainability; instead all evaluations require continual review of purposes, evaluands, outcomes, and other matters relevant to products, projects, programs, and policies. Other questions we may pursue include:

  • How can evaluators ensure that their decisions, findings, and insights are meaningful to diverse communities, contexts, and cultures?
  • What strategies exist for evaluators, especially considering our transdisciplinary backgrounds, to convey relevant evaluation processes, practices, and procedures?

Consider this an invitation to submit a proposal for Evaluation 2012 and join us in Minneapolis as we consider evaluation in complex ecologies where relationships, responsibilities, and/or relevance are key issues to address.”


Conference: Measuring Impact of Higher Education for Development

From: Monday 19th March 2012 to Tuesday 20th March 2012

Venue:  Birkbeck College, Malet Street, London

Organisers: London International Development Centre (LIDC); Association of Commonwealth Universities (ACU)

Background: Higher education for international development has been, in recent years, a neglected area relative to other educational interventions. Yet higher education (HE) is necessary for the attainment of the Millennium Development Goals (MDGs) and for economic development in low- and middle-income countries.

There is a long history of development assistance interventions in HE to support development goals, directed at strengthening individual, organisational and institutional capacity. These have included scholarship programmes as well as support to specific universities and university networks in low- and middle-income countries, and support to academic research and training partnerships.

However, there has been little comparison of these different interventions in terms of their international development impact. This limits our understanding of “what works” in HE interventions for development, and our ability to invest effectively in future.

The aim of this two-day international conference is to examine the current status of impact evaluation for HE interventions and to identify research gaps and needs for the future. The conference will focus on three issues:
  • What has been, and should be, the development intention of HE interventions?
  • How should development impacts be measured?
  • What is our experience with measurement methods and tools to date, where are the gaps and what research priorities emerge?

The programme will be posted online soon.

Who should attend:

The conference will bring together experts from three research sectors: higher education, international development and impact evaluation, drawn from academia, think tanks, government agencies and civil society organisations. PhD students are welcome if their research is relevant to the theme of the conference.

Registration is open between 2 February and 5 March 2012.
To register, please fill in and return the registration form.
Attendance is free of charge.

Conference: Evaluation in a Complex World – Balancing Theory and Practice

April 29- May 1, 2012 (Sunday-Tuesday)
Seaview Resort, Galloway, NJ, USA. (http://www.dolce-seaview-hotel.com)

Organised by the Eastern Evaluation Research Society, a Regional Affiliate of the American Evaluation Association. Flyer available here

Keynote Speaker: Jennifer Greene, University of Illinois and President of AEA

Featured Speakers: Eleanor Chelimsky, U.S. Government Accountability Office and former AEA President; Rodney Hopson, Duquesne University and incoming President of AEA

Sunday Afternoon Pre-Conference Workshops and Sessions:

Meta-Analysis, Ning Rui, Research for Better Schools

Focus Group Research: Planning and Implementation, Michelle Revels, ICF International

Career Talk with the Experts (NEW!): an unstructured conversation about your evaluation career. This session is free to participants!

Sunday Evening Interactive & Networking Session: John Kelley, Villanova University

Concurrent Sessions featuring: Skill-Building Sessions, Individual Presentations & Panel Sessions

A full conference program will be posted at www.eers.org by mid-February 2012.

Conference about “The Future of Evaluation in Modern Societies”, Germany.

“The Center for Evaluation (CEval) of Saarland University, Germany, is a globally active research institute for applied social science in the field of evaluation and a member of DeGEval (the German Evaluation Society). We are organising an international conference on “The Future of Evaluation in Modern Societies”, to be held on 14 and 15 June 2012 in Saarbruecken, Germany.

The objective of this event is to discuss the role of evaluation in societies comprehensively and in international comparison, bringing the different strands of discussion together into a joint debate. Numerous renowned scientists from the USA, Latin America, Africa and Europe have already been secured for keynote speeches and lectures.

Please find the detailed program and registration form on our homepage: http://futureofevaluation.ceval.de

You will also find a review of our recent book “A Practitioner Handbook on Evaluation”, which will appeal to evaluation practitioners, policy-makers who conduct evaluations in their daily work, students training in applied research, and organisations implementing projects and programmes that could be the subject of an evaluation.

—————————————

Maria Albrecht, M.A., Center for Evaluation (CEval), Saarland University, P.O. Box 15 11 50, 66041 Saarbrücken, Germany, Phone: +49 (0)681 302-3561, Fax: +49 (0)681 302-3899, www.ceval.de

UKES Conference 2012 – Evaluation for results: What counts? Who stands to gain? How is it done?

16 March 2012
The Macdonald Hotel, Birmingham

[from UKES website] UKES conferences address leading issues of the day in programme and policy evaluation. The 2012 Annual Conference will address the current drive towards evaluation focused on results – frequently linked to ‘Payment by Results’ and what, in international development and elsewhere, is familiar as ‘Results-Based Management’.

Evaluators, and those who commission evaluations, who advocate a focus on results reflect a legitimate concern with the productivity and efficiency of programmes and the capacity of interventions to secure gains and improvements in practice and provision. They point out that programmes should be held to account for accomplishing what they were designed to do and paid for, often out of public funds. A primary focus on results seeks to emphasise main effects and outcomes that have been valued and agreed. In times of austerity and unusually scarce resources, proponents of a strong focus on results argue that emphasising value for money is socially responsible.

Others argue that an over-emphasis on measuring a programme’s results neglects important questions of how results are generated in a context, whether results capture the real quality and accomplishments of a programme, and how those results may reflect the values and ambitions of all programme stakeholders. They remind us of secondary effects and ‘unintended beneficiaries’ of programmes that may not be readily captured by results. Some also raise questions about the source of criteria over what counts as a worthwhile result given that not all programme achievements can be measured, and stakeholders may differ over a programme’s objectives. 

Against this background conference participants are invited to contribute their own perspectives on the dominant issues they consider relevant to the theory and practice of evaluation in the public interest. We anticipate a lively and informative debate to stimulate professional learning and to contribute to the improvement of evaluation practice and commissioning.

Potential contributors are invited to propose discussions, seminar presentations, lectures or poster sessions which explore issues around this theme. Those issues may fall within one of the following categories – though you are invited to propose your own theme:

  • How do we define a valid ‘result’ and whose results get counted?
  • How do we best measure a result – including taking account of counterfactuals?
  • How do we understand where results came from, what significance they have and whether they can be replicated – i.e. what is the relation between a result and context?
  • Where do benchmarks come from to measure results achievement?
  • If a result is, say, a 4% improvement – how do we know whether that is a lot or a little under the circumstances?
  • How do we represent the circumstances and mechanisms that give rise to a result?
  • How do we account for programme accomplishments that are not represented in results?
  • Is results-measurement a robust foundation for replication/extension of a programme?

A formal call for papers and proposals for sessions will be circulated shortly. The conference will be preceded on 15 March 2012 by a choice of training workshops on specialist topics.
