Decoding AI Decisions on Depth Map Analysis for Enhanced Interpretability
Krishnamurthy Oku, Laxmi Srinivas Samayamantri, Sangeeta Singhal, R. Steffi
Copyright: © 2024 |Pages: 22
DOI: 10.4018/979-8-3693-3739-4.ch008

Abstract

This chapter delves into the innovative application of depth map analysis for enhancing the interpretability of artificial intelligence (AI) decision-making processes. The increasing complexity of AI algorithms, particularly deep learning models, frequently results in a “black-box” situation, where the rationale behind decisions is opaque and challenging to decipher. To tackle this issue, the authors introduce a method that utilizes depth map analysis to clarify these processes, thereby making AI decisions more transparent and understandable. The methodology is centred on generating depth maps from the layers of neural networks. These maps are subsequently scrutinized to gain insights into the significance of different features and the pathways leading to decisions. This strategy offers a dual advantage: it serves as both a visual and a quantitative tool for interpreting AI decisions. Such an approach is pivotal in fostering AI systems that are both more trustworthy and more reliable. The effectiveness of the method is showcased through a series of experiments, underscoring its applicability and potential benefits across various AI domains. These experiments demonstrate how depth map analysis can elucidate the internal workings of AI models, making their decision-making processes more transparent and interpretable to users, researchers, and developers alike. This advancement holds significant promise for the future of AI, particularly in areas where understanding of, and trust in, AI decision-making is crucial.
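The abstract does not specify how the authors derive depth maps from network layers. As a rough illustration only, one common way to collapse an intermediate layer's activations into a single 2-D map that can be inspected visually or quantitatively is channel-averaging followed by min-max normalization. Everything below (the tensor shape, the function name `layer_depth_map`, the use of random data in place of a real forward pass) is an assumption for demonstration, not the authors' implementation.

```python
import numpy as np

# Hypothetical activation tensor from an intermediate CNN layer,
# shape (channels, height, width). In practice this would be captured
# from a real network (e.g. via a forward hook); random data stands in here.
rng = np.random.default_rng(0)
activations = rng.random((64, 14, 14))

def layer_depth_map(acts):
    """Collapse a (C, H, W) activation tensor into one 2-D map by
    averaging over channels, then min-max normalize to [0, 1] so the
    result can be rendered as a heat map or compared across layers."""
    m = acts.mean(axis=0)
    m = m - m.min()
    span = m.max()
    return m / span if span > 0 else m

dmap = layer_depth_map(activations)
print(dmap.shape)                            # (14, 14)
print(float(dmap.min()), float(dmap.max()))  # 0.0 1.0
```

Such a map supports the dual use the chapter describes: rendered as an image it is a visual explanation, while per-region statistics of the normalized values give a quantitative measure of which spatial features a layer emphasizes.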