Leveraging Explainable AI for Cybersecurity

Nasim Nezhadsistani (Communication Systems Group, Department of Informatics, University of Zürich, Switzerland) and Burkhard Stiller (Communication Systems Group, Department of Informatics, University of Zürich, Switzerland)
DOI: 10.4018/979-8-3373-2200-1.ch009

Abstract

In recent years, cyber threats have grown rapidly in both scale and sophistication. As organizations rely ever more heavily on digital infrastructures, robust cybersecurity solutions become essential. Artificial intelligence (AI) has emerged as a promising tool for enhancing security, as it can detect threats in real time, identify anomalies, and automate incident response. However, these AI systems often operate as “black boxes,” producing decisions that are difficult for cybersecurity analysts to interpret. Explainable AI (XAI) addresses this issue by providing transparent and interpretable insights into how AI models arrive at their outputs. By combining powerful AI-driven analytics with clear explanations, cybersecurity professionals can rely more confidently on machine learning models to detect threats. This chapter examines how Explainable AI can bolster cybersecurity, highlighting its core principles, defense applications, ethical concerns, and technical constraints.
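
To make the idea concrete, the minimal sketch below (an illustration, not code from the chapter) trains a simple intrusion classifier on synthetic network-flow features and uses SHAP, a widely used XAI library, to show how much each feature contributed to one prediction. The library choice, feature names, and data are assumptions for illustration only.

```python
# Minimal sketch, assuming scikit-learn and shap are installed.
# Feature names and data are synthetic, for illustration only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["bytes_sent", "bytes_received", "duration_s", "failed_logins"]

# Synthetic "network flows": label a flow malicious (1) when it shows
# many failed logins combined with a short connection duration.
X = rng.normal(size=(1000, 4))
y = ((X[:, 3] > 0.5) & (X[:, 2] < 0.0)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values: how much each feature pushed
# this particular prediction toward "malicious" versus "benign".
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X[:1])

# Depending on the shap version, the result is a per-class list or a
# single array; either way, extract the contributions toward class 1.
if isinstance(sv, list):
    contributions = sv[1][0]
elif sv.ndim == 3:
    contributions = sv[0, :, 1]
else:
    contributions = sv[0]

for name, value in zip(feature_names, contributions):
    print(f"{name:>15}: {value:+.3f}")
```

Because the synthetic label depends only on failed_logins and duration_s, those two features should dominate the attribution, which is exactly the kind of justification an analyst would want attached to an alert.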