Explanations in Artificial Intelligence Decision Making: A User Acceptance Perspective

Norman G. Vinson, Heather Molyneaux, Joel D. Martin
DOI: 10.4018/978-1-5225-9069-9.ch006

Abstract

The opacity of AI systems' decision making has led to calls to modify these systems so they can provide explanations for their decisions. This chapter discusses what these explanations should address and what form they should take to meet the concerns that have been raised and to prove satisfactory to users. More specifically, the chapter briefly reviews the typical forms of AI decision making currently used to make real-world decisions affecting people's lives. Drawing on concerns about AI decision making expressed in the literature and the media, the chapter then proposes principles that these systems should respect, along with corresponding requirements that explanations must meet to uphold those principles. A mapping between these explanation requirements and the types of explanations generated by AI decision-making systems reveals the strengths and shortcomings of the explanations those systems produce.

Introduction

Artificial Intelligence (AI) systems are moving out of the laboratory and into the real world at an extremely rapid pace. As they do, the decisions they make affect people’s lives in diverse domains such as medical treatments and diagnoses, hiring and promotions, loans and the interest rates borrowers pay, prison sentences, and so on (Pasquale, 2015; Roshanov et al., 2013; Wexler, 2018). This adoption has produced a corresponding increase in ethical concerns about how these decisions are made, and for good reason, as these decisions can be inaccurate or unacceptable (Guidotti et al., 2018). Such concerns have led to calls for AI systems to explain their decisions (Campolo, Sanfilippo, Whittaker, & Crawford, 2017; Edwards & Veale, 2017; Guidotti et al., 2018; Molnar & Gill, 2018; Wachter, Mittelstadt, & Russell, 2018). In this context, this chapter provides testable principles to which such explanations should conform to be acceptable to users of AI systems: both the people who use the AI systems to generate decisions and those who are subject to the decisions. If users do not find the explanations of AI decision-making systems acceptable, they will either not use the systems to generate decisions or refuse to comply with, or protest, those decisions (Guidotti et al., 2018)1.

Because the acceptability of AI explanations to users is a nascent area of research, the authors did not perform a systematic review of the topic. Rather, the authors searched broadly for journal articles and grey literature that might be relevant to the issue, using keywords such as “ethic*”, “explanation*” and “artificial intelligence”. Backward snowballing (Wohlin, 2014) led to other relevant articles. A review of the articles resulted in the proposal of three principles to which AI decision-making systems (AIDMS) should conform to support user acceptance. The principles were formulated to be testable, that is, to allow experiments to reveal whether explanations that conform to these principles are preferred by users, and/or are sufficient for users to accept the corresponding decisions. These proposed principles are fairness, contestability, and competence.

Fairness requires the equitable treatment of various groups of people and reasonable processing of relevant data. Contestability refers to the ability of a person about whom a decision is made to contest that decision. Competence requires the AIDMS’ decision-making performance to meet a certain quality threshold. While these principles do not refer directly to AIDMS explanations, they do generate requirements that explanations must meet for the principles to be respected. To illustrate how the principles relate to explanations, the principles are coupled with real-world cases identified through backward snowballing (Wohlin, 2014) of the articles and reports collected and through Google searches. Such case-based illustrations are a common educational approach in many fields (Johnson et al., 2012).

These three testable principles constitute this chapter’s primary contribution to the field. The authors make a secondary contribution through their discussion of the varying potential of different AIDMSs to adhere to the proposed principles, showing that not all so-called black box (Guidotti et al., 2018) AIDMSs are equally opaque.

Philosophical investigations of the nature of explanations (e.g. Tuomela, 1980) were not examined, nor were the ways in which people generate explanations themselves (e.g. Hilton, 1990). Psychological research on explanation was also not examined. On the whole, psychology has focused on how people explain other people’s behavior (Malle, 2011), the conditions that induce people to make cause-and-effect inferences (e.g. Subbotsky, 2004), the extent to which people’s explanations of events are consistent with scientific ones (Kubricht, Holyoak, & Lu, 2017), and the formation of mental models about real-world events (Battaglia, Hamrick, & Tenenbaum, 2013). It is not that this research is irrelevant to the issue; it is just not as directly applicable as material that is specifically about AIDMS explanations and concerns.

Key Terms in this Chapter

Artificial Intelligence (AI): The subfield of computer science concerned with the simulation or creation of intelligent behavior in computers.

Neural Network (NN): In this chapter, a layered graph in which each layer contains a set of nodes that are fully connected to the nodes of the next layer, with the first layer representing inputs and the last representing outputs or decisions. Through machine learning, the graph encodes the statistical relationships between inputs and outputs so that it can generate outputs given only inputs.
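To make this structure concrete, the following is a minimal sketch (not drawn from the chapter, and assuming NumPy is available) of a forward pass through such a fully connected, layered network; the layer sizes and random weights are purely illustrative.

```python
# Minimal illustrative forward pass through a fully connected, layered network.
# Layer sizes and random weights are assumptions for illustration only.
import numpy as np

def forward(x, weights, biases):
    """Propagate an input vector through successive fully connected layers."""
    activation = x
    for W, b in zip(weights, biases):
        # Every node in one layer feeds every node in the next, so each layer
        # is a matrix multiplication followed by a nonlinearity.
        activation = np.tanh(W @ activation + b)
    return activation

rng = np.random.default_rng(0)
# Three inputs -> four hidden nodes -> one output (a decision score).
weights = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
biases = [np.zeros(4), np.zeros(1)]
print(forward(np.array([0.2, -1.0, 0.5]), weights, biases))
```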

AI Decision-Making System (AIDMS): An artificial intelligence software system that produces decisions relating to particular inputs.

Decision Tree (DT): A simple branching structure that organizes a sequence of intermediate decisions to reach a final decision. Decision trees can be learned via machine learning.
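For illustration, a minimal hand-written sketch of such a branching structure is shown below, framed as a hypothetical loan decision; the inputs, thresholds, and outcomes are invented for this example and are not taken from the chapter.

```python
# Hand-written decision tree for a hypothetical loan decision.
# The tests, thresholds, and outcomes are invented for illustration only.
def decide_loan(income: float, debt: float, years_employed: float) -> str:
    # Each internal node tests one input; each branch leads either to
    # another test or to a final decision at a leaf.
    if income < 30_000:
        return "deny"
    if debt / income > 0.5:
        return "deny" if years_employed < 2 else "refer to human reviewer"
    return "approve"

print(decide_loan(income=45_000, debt=10_000, years_employed=3))
```

The same branching structure could also be learned automatically from historical data rather than written by hand, which is the machine-learning case referred to in the definition.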

User: In this chapter, user refers to both the people who use the AI decision-making system to generate decisions as well as the people about whom the decisions are made.

Machine Learning (ML): A set of AI techniques to develop computer systems that learn statistical regularities between inputs and outputs, thereby generating outputs from a set of inputs alone.

Automated Linear Regression: A machine-learning technique that learns a weighted sum of the inputs to predict outputs.
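As a brief illustration, and assuming the scikit-learn library is available, the sketch below fits such a weighted-sum model to a few made-up input-output pairs; the data and variable names are hypothetical.

```python
# Illustrative weighted-sum (linear regression) model fit to made-up data.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0, 2.0], [2.0, 0.5], [3.0, 1.5], [4.0, 3.0]])  # inputs
y = np.array([5.0, 3.5, 6.5, 10.0])                             # outputs
model = LinearRegression().fit(X, y)

# The learned coefficients and intercept are the summed weights:
# prediction = intercept + coef[0] * x1 + coef[1] * x2
print(model.coef_, model.intercept_)
print(model.predict([[2.5, 1.0]]))
```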

Recommender system: An AI system that recommends products, services, articles, or social connections to a user based on the user’s profile.
