Human-Centric AI to Mitigate AI Biases: The Advent of Augmented Intelligence

Antoine Harfouche (University Paris Nanterre, France), Bernard Quinio (University Paris Nanterre, France), and Francesca Bugiotti (Paris-Saclay, CNRS, LISN, CentraleSupélec, France)
Copyright: © 2023 |Pages: 23
DOI: 10.4018/JGIM.331755

Abstract

The global health crisis represents an unprecedented opportunity for the development of artificial intelligence (AI) solutions. This article aims to tackle some of the biases in artificial intelligence by implementing a human-centric AI to help decision-makers in organizations. It relies on the results of two design science research (DSR) projects: SCHOPPER and VRAILEXIA. These two design projects operationalize the human-centric AI approach in two complementary stages: 1) the first installs a human-in-the-loop informed design process, and 2) the second implements a usage architecture that aggregates AI and humans. The proposed framework offers several advantages: it integrates human knowledge into the design and training of the AI, provides humans with understandable explanations of AI predictions, and drives the advent of augmented intelligence, turning algorithms into a powerful counterweight to human decision-making errors and humans into a counterweight to AI biases.

Introduction

Due to black swan events in the context of global health crises (Chen et al., 2021), organizations are increasingly integrating artificial intelligence (AI) into their operations (Dwivedi et al., 2021). During a crisis, the most critical goal of most organizational decisions is to use scarce resources effectively and improve performance (Johnson et al., 2022). With its ability to process and analyze large volumes of data faster than a human brain can, AI helps determine the possible consequences of actions and streamlines the decision-making process (Harfouche et al., 2022).

Many AI projects have been considered failures. For example, in 2016, the chatbot Tay was introduced by Microsoft with the promise of an “AI with zero chill,” but it quickly began to make racist and derogatory remarks in response to aggressive Twitter users. On March 18th, 2018, Elaine Herzberg paid with her life due to an AI failure (Smith, 2018). She was fatally struck by an automated Uber test vehicle while pushing a bicycle across a four-lane road in Arizona. Many researchers have cautioned that some of these AI failures are related to the development of biased algorithms (see, e.g., Akter et al., 2022; Johnson et al., 2022; Martin, 2018; Mittelstadt et al., 2016; Ziewitz, 2015). Bias in AI can occur during data collection, AI design, training of the algorithm, and interpretation of outputs, as well as after deployment and use (Harfouche et al., 2023).

According to Akter et al. (2022), artificial intelligence is mainly used in situations that require capturing vast amounts of data, and it exhibits characteristics of human intelligence (Huang & Rust, 2021) through learning from external data (Kaplan & Haenlein, 2019). Collins et al. (2021) consider that there is an urgent need to define AI in order to help policymakers better identify potential threats and opportunities and to orient research toward the needed frameworks. They call for more rigorous AI academic studies, a better and more detailed definition of AI in information systems (IS) studies, and the establishment of a general process of cumulative knowledge building. In this paper, we adopt the definition of Rai et al. (2019), who consider AI to be “the ability of a machine to perform cognitive functions that can be associated with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem-solving, decision-making, and even demonstrating creativity” (p. iii). We treat AI as a set of machine learning (ML) algorithms that build a model of rules or links learned from training data (Harfouche et al., 2019). Artificial intelligence can learn from data by automatically identifying hidden patterns and building decision-making models. Most data, however, are biased (Akter et al., 2022). Naturally, ML also reflects the bias inherent in the data itself: machine learning models can replicate and sometimes exacerbate existing biases (Harfouche et al., 2023).
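The mechanism by which a model replicates the bias in its training data can be illustrated with a minimal sketch. The data below are entirely hypothetical (a toy "loan approval" history that is skewed against group B), and the "model" is deliberately simple: it learns only the majority historical outcome per group, which is enough to show that a model trained faithfully on biased decisions reproduces those decisions.

```python
# Minimal sketch (hypothetical data): a model trained on biased
# historical decisions replicates that bias at prediction time.
from collections import defaultdict

# Hypothetical historical loan decisions: (group, approved?).
# The history is skewed: group A was mostly approved, group B mostly denied.
training_data = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 0), ("B", 0), ("B", 0), ("B", 1),
]

def train_majority_model(data):
    """Learn the majority historical outcome for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [denials, approvals]
    for group, approved in data:
        counts[group][approved] += 1
    # Predict 1 (approve) only where approvals outnumber denials.
    return {g: int(c[1] > c[0]) for g, c in counts.items()}

model = train_majority_model(training_data)
print(model)  # {'A': 1, 'B': 0} -- the historical bias is replicated
```

Real ML pipelines learn far richer patterns than a per-group majority vote, but the failure mode is the same: absent corrective human knowledge of the kind the human-centric approach advocates, the model has no way to distinguish a legitimate pattern from an inherited prejudice.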

We examined the following research question: How can human-centric AI mitigate AI biases and contribute to the advent of augmented intelligence?

Whereas the challenges of past decades were associated with the social phenomena of knowledge transfer and knowledge creation, the main challenge today lies in human-computer interaction, and more specifically in how to combine the abilities and knowledge of human beings with various AI algorithms. A key sustainability challenge in artificial intelligence is the need for more collaborative, transdisciplinary, and robust scientific involvement in the design of AI architectures, the training of AI agents, explanations of hypothesis validation, and the continuous usage of AI.
