Explainable Artificial Intelligence

Vanessa Keppeler, Matthias Lederer, Ulli Alexander Leucht
Copyright: © 2023 | Pages: 18
DOI: 10.4018/978-1-7998-9220-5.ch100

Abstract

The explainability of artificial intelligence (AI) is one of the central challenges for the wider use of the technology in many industries and applications. The more powerful and efficient AI algorithms become, the less comprehensible they usually are to users. While there is widespread agreement on the basic requirement of explainability for AI applications, the design of an adequate explanation is rarely defined. This contribution presents basic concepts of explainability as well as current approaches to explanations for AI. It describes which methods are fundamentally suitable for considering an explanation complete and how an explanation must be designed in order to be assessed as interpretable.

Introduction

Artificial intelligence (AI) can be used in almost all areas of a modern digital enterprise. While the very first AI systems were easy to interpret, increasingly opaque decision-making systems have emerged in recent years (Arrieta et al., 2019). This is largely because their tremendous progress in performance has made them increasingly complex, so that it is difficult to understand how they arrive at a decision or outcome (Biran et al., 2017). Such AI systems are therefore often referred to as 'black boxes' (Bauer et al., 2021). These complex and non-transparent models present a significant challenge to many companies when it comes to assuming responsibility and ensuring the traceability of decisions.
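
To make the 'black box' notion concrete, the following minimal sketch contrasts a shallow decision tree, whose complete decision logic can be printed and read, with a gradient-boosting ensemble whose logic is spread over hundreds of trees. The use of Python with scikit-learn, the benchmark dataset, and the specific models are illustrative assumptions and are not prescribed by the chapter.

# Illustrative sketch only (not from the chapter): contrasting an
# interpretable model with an opaque one, assuming scikit-learn and the
# breast-cancer benchmark dataset as stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# A shallow decision tree: its complete decision logic can be printed and
# read by a human observer.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))

# A gradient-boosting ensemble: usually more accurate, but its decision
# process is distributed over hundreds of trees and cannot be read off in
# the same way -- the 'black box' referred to above.
ensemble = GradientBoostingClassifier(n_estimators=300, random_state=0).fit(X, y)
n_nodes = sum(est.tree_.node_count for est in ensemble.estimators_.ravel())
print(f"The ensemble encodes its logic in {n_nodes} tree nodes in total.")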

The explainability of AI is thus one of the central challenges for the comprehensive use of the new technology. While there is widespread agreement on the basic requirement of explainability, the design of an appropriate explanation is rarely well defined, and it remains unclear what 'explainable' actually means. There are different ways to formulate explanations, but there is no accepted definition of which formulation is appropriate to make AI explainable (Gilpin et al., 2019).

The comprehensibility and explainability of AI systems and their results are a basic prerequisite for the use and acceptance of the technology in many companies (Manikonda et al., 2020). In the research field of 'Explainable AI', the generation of explanations and the establishment of comprehensibility for AI systems are being researched intensively (Bauer et al., 2021). This covers all decisions that are prepared or made by highly complex AI models (Arya et al., 2019).
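
As one illustration of how such explanations can be generated, the sketch below explains a trained black-box classifier by measuring how much its accuracy drops when each input feature is shuffled. Permutation feature importance is only one of many post-hoc, model-agnostic techniques discussed in the literature; the choice of scikit-learn, the dataset, and the random-forest model are assumptions made for the example, not methods taken from the chapter.

# Illustrative sketch only: a post-hoc, model-agnostic explanation via
# permutation feature importance. The trained model is treated purely as
# a black box.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature several times and record the drop in test accuracy;
# large drops point to the features the model's decisions depend on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: mean importance {result.importances_mean[idx]:.3f}")

Feature-level importance scores of this kind are only one possible form of explanation; the chapter goes on to assess such explanations with respect to their completeness and interpretability.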

This contribution shows that there are numerous approaches to explanation, each with varying relevance for different interest groups. Furthermore, the quality of explanations for artificial intelligence is described in more detail and evaluated with respect to the completeness of an explanation as well as its interpretability.

Key Terms in this Chapter

Explanation: Information that describes the cause of a state of affairs by formulating its logical and causal relations.

Explainability: Degree to which an observer or user can understand the cause of a decision by formulating an explanation.

Artificial Intelligence: Ability of a machine to perform cognitive functions that we associate with the human mind.

Interpretability: Characteristic of a good explanation that presents a fact in a way that is as comprehensible as possible for humans (Doshi-Velez et al., 2017).

Explainable AI: An explanatory agent that reveals the underlying causes of its own or another agent's decision making.

Completeness: Characteristic of a good explanation that aims to describe a fact as precisely and accurately as possible (Gilpin et al., 2019).

Black Box: Component of a system of which only the external behavior is known, but not its inner workings.
