A Survey on Explainability in Artificial Intelligence

Prarthana Dutta, Naresh Babu Muppalaneni, Ripon Patgiri
DOI: 10.4018/978-1-7998-7685-4.ch004

Abstract

The world has been evolving with new technologies and advances every day. With learning technologies, the research community can provide solutions in every aspect of life. However, these technologies are found to lag behind in their ability to explain their predictions. The current situation is such that modern models can predict and decide upon various cases more accurately and rapidly than a human, yet they fail to provide an answer when asked “how” they arrived at such a prediction or “why” one must trust it. To attain a deeper understanding of this rising trend, the authors surveyed a recent and widely discussed contribution, “explainability,” which provides rich insight into the predictions made by a model. The central premise of this chapter is to provide an overview of the studies explored in this domain and an idea of the current scenario, along with the advancements achieved to date in the field. This survey aims to provide a comprehensive background of the broad spectrum of “explainability.”
Chapter Preview

Background

The term “eXplainable Artificial Intelligence” is usually abbreviated as XAI. It was first formulated by Van Lent et al. in 2004 (Van Lent et al., 2004). Before that, such systems were simply addressed as “black boxes.”
