Insights Into Incorporating Trustworthiness and Ethics in AI Systems With Explainable AI

Meghana Kshirsagar, Krishn Kumar Gupt, Gauri Vaidya, Conor Ryan, Joseph P. Sullivan, Vivek Kshirsagar
Copyright: © 2022 |Pages: 23
DOI: 10.4018/IJNCR.310006

Abstract

In the seven decades since the advent of artificial intelligence (AI), researchers have demonstrated and deployed AI systems across many domains. The absence of model explainability in critical systems, such as medical AI and credit risk assessment, has led to the neglect of key ethical and professional principles, which can cause considerable harm. Explainability methods allow developers to examine their models beyond mere performance metrics and to identify errors, saving time and reducing development costs. The article argues that steering traditional AI systems toward responsible AI engineering can address the concerns raised by the deployment of AI systems and mitigate them by incorporating explainable AI methods. Finally, the article concludes with the societal benefits of future AI systems and the revenue potential possible through the deployment of trustworthy and ethical AI systems.

Literature Methodology

Our literature review draws on information from a range of sources. Figure 1 illustrates the pipeline of the proposed study, in which we integrate information on the past, the present, and the predicted future of AI-powered applications. The objective of the study is to discuss the impact of AI-powered products on the wider community.

Figure 1.

Pipeline of the proposed study

We present the evolution of AI, the Machine Learning (ML) algorithms in practice, the applications in use, and the future of businesses and technologies with the integration of AI and its determinants, such as trust, big data, and ubiquitous computing. We conducted a detailed study of widely used ML and Deep Learning (DL) algorithms along with their use cases. We discuss in depth how incorporating explainability and interpretability into AI applications can lead to robust, trustworthy, fair, and transparent AI systems. Finally, we bring to attention the importance of responsible AI engineering in leading to regulated and accountable AI systems of the future.

The unique contributions of our proposed study are:

  1. The evolution of AI and deep learning technology over the past seven decades;

  2. Popular ML algorithms along with use cases drawn from diverse application domains;

  3. Intelligent business models and market trends for industrial AI-powered products;

  4. Incorporating responsible AI for trustworthy AI systems.

Background

This section is organized into several subsections, beginning with a roadmap of seven decades of AI, followed by a brief review of ML/DL algorithms, and concluding with the diverse application domains of AI.
