Introduction
Due to black swan events such as global health crises (Chen et al., 2021), organizations are increasingly integrating artificial intelligence (AI) into their operations (Dwivedi et al., 2021). During a crisis, the most critical goal of most organizational decisions is to use scarce resources effectively and improve performance (Johnson et al., 2022). With its ability to process and analyze large volumes of data faster than a human brain can, AI helps determine the possible consequences of actions and streamlines the decision-making process (Harfouche et al., 2022).
Many AI projects, however, have been considered failures. For example, in 2016 Microsoft introduced the chatbot Tay with the promise of an “AI with zero chill,” but it quickly began making racist and derogatory remarks in response to aggressive Twitter users. On March 18, 2018, Elaine Herzberg paid with her life for an AI failure (Smith, 2018): she was fatally struck by an automated Uber test vehicle while pushing a bicycle across a four-lane road in Arizona. Many researchers have cautioned that some of these AI failures stem from the development of biased algorithms (see, e.g., Akter et al., 2022; Johnson et al., 2022; Martin, 2018; Mittelstadt et al., 2016; Ziewitz, 2015). Bias in AI can occur during data collection, AI design, training of the algorithm, and interpretation of outputs, as well as after deployment and use (Harfouche et al., 2023).
According to Akter et al. (2022), artificial intelligence is mainly used in situations that require capturing vast amounts of data; it exhibits characteristics of human intelligence (Huang & Rust, 2021) by learning from external data (Kaplan & Haenlein, 2019). Collins et al. (2021) argue that there is an urgent need to define AI so that policymakers can better identify potential threats and opportunities and orient research toward the needed frameworks. They call for more rigorous academic studies of AI, a more detailed definition of AI in information systems (IS) research, and the establishment of a general process of cumulative knowledge building. In this paper, we adopt the definition of Rai et al. (2019), which considers AI as “the ability of a machine to perform cognitive functions that can be associated with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem-solving, decision-making, and even demonstrating creativity” (p. iii). We treat AI as a collection of machine learning (ML) algorithms that build models of rules or relationships learned from training data (Harfouche et al., 2019). Artificial intelligence can learn from data by automatically identifying hidden patterns and building decision-making models. Most data, however, are biased (Akter et al., 2022), and ML naturally reflects the bias inherent in the data: machine learning models can replicate and sometimes exacerbate existing biases (Harfouche et al., 2023).
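To make this last point concrete, the following minimal sketch (illustrative only; it uses hypothetical, synthetic hiring data rather than any dataset from the studies cited above) shows how a standard classifier trained on historically biased decisions reproduces that bias even for equally qualified candidates:

```python
# Illustrative sketch: a logistic regression trained on historically
# biased hiring decisions learns to replicate that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: group membership (0 or 1) and a skill score.
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# Biased historical labels: past decisions favored group 1
# independently of skill.
hired = ((skill + 1.5 * group + rng.normal(0, 0.5, n)) > 1).astype(int)

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Compare two equally skilled candidates (skill = 1.0), one from each group.
probs = model.predict_proba([[0, 1.0], [1, 1.0]])[:, 1]
print(f"P(hired | group 0) = {probs[0]:.2f}, "
      f"P(hired | group 1) = {probs[1]:.2f}")
# The gap between the two probabilities shows the model reproducing
# the historical bias encoded in its training data.
```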
We examined the following research question: How can human-centric AI mitigate AI biases and contribute to the advent of augmented intelligence?
Whereas the challenges of past decades were associated with the social phenomena of knowledge transfer and knowledge creation, the main challenge today concerns human-computer interaction: more specifically, how to combine the abilities and knowledge of human beings with various AI algorithms. A key sustainability challenge in artificial intelligence is the need for more collaborative, transdisciplinary, and robust scientific involvement in the design of AI architectures, the training of AI agents, the explanation of hypothesis validation, and the continuous use of AI.