Introduction to XAI and Clinical Decision Support

Thomas M. Connolly (DS Partnership, UK), Mario Soflano (Glasgow Caledonian University, UK), and Petros Papadopoulos (University of Strathclyde, UK)
DOI: 10.4018/978-1-6684-5092-5.ch002

Artificial intelligence (AI) and machine learning (ML) offer significant opportunities for innovation in healthcare through their ability to solve cognitive problems that would normally require human intelligence. However, the potential of ML in healthcare has not yet been realised, with few existing reports of clinical or cost benefits arising from its real-world use in clinical practice. This is largely due to a lack of understanding of how some ML models operate and, ultimately, how they come to make decisions. Explainable AI (XAI) has emerged in response to this problem, investigating methods and techniques that provide insights into the outcome of an ML model and present them in qualitative, understandable terms or visualisations to the stakeholders of the model. This chapter introduces XAI and provides some examples of its use within healthcare.
Chapter Preview


Explainable AI aims to explain the way that AI systems work. At a high level, we can distinguish between two types of models:

  • models that are inherently explainable - simple, transparent, and easy to understand, sometimes referred to as white-box or transparent models;

  • models that are black-box in nature and require explanation through separate, replicating (surrogate) models that mimic the behaviour of the original model.
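The surrogate approach described above can be sketched in a few lines. The example below is a minimal, hypothetical illustration (not a method from this chapter): it assumes scikit-learn, uses a synthetic dataset, treats a random forest as the black-box model, and trains a shallow decision tree on the forest's own predictions so that the tree serves as a transparent surrogate.

```python
# Hypothetical sketch: approximating a black-box model with a white-box
# surrogate. Assumes scikit-learn is available; the dataset is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Black-box model: an ensemble whose internal logic is hard to inspect.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: a shallow, inherently interpretable tree trained to mimic
# the black box's predictions rather than the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box. A high
# fidelity suggests the tree's simple rules are a usable explanation.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
```

The key design point is that the surrogate is evaluated on *fidelity* (agreement with the black box) rather than accuracy against ground truth: the surrogate's job is to explain the original model's behaviour, not to replace it.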

White-box systems include:
