Addressing Algorithmic Bias in AI-Driven Customer Management

Shahriar Akter, Yogesh K. Dwivedi, Kumar Biswas, Katina Michael, Ruwan J. Bandara, Shahriar Sajib
Copyright: © 2021 | Pages: 27
DOI: 10.4018/JGIM.20211101.oa3

Research on AI has gained momentum in recent years, and many scholars and practitioners increasingly highlight the dark sides of AI, particularly algorithm bias. This study elucidates situations in which AI-enabled analytics systems make biased decisions against customers based on gender, race, religion, age, nationality, or socioeconomic status. Based on a systematic literature review, this research proposes two approaches (i.e., a priori and post-hoc) to overcome such biases in customer management. As part of the a priori approach, the findings suggest scientific, application, stakeholder, and assurance consistencies. With regard to the post-hoc approach, the findings recommend six steps: bias identification, review of extant findings, selection of the right variables, responsible and ethical model development, data analysis, and action on insights. Overall, this study contributes to the ethical and responsible use of AI applications.
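The six post-hoc steps above can be read as a pipeline. The following is a hypothetical sketch (not the authors' implementation): step bodies are placeholder stubs, and the function names and record structure are assumptions introduced purely for illustration.

```python
# Hypothetical skeleton of the six post-hoc steps; each step is a stub.

def identify_bias(records):
    # Step 1: bias identification - compare outcome rates across a sensitive attribute.
    rates = {}
    for r in records:
        rates.setdefault(r["group"], []).append(r["outcome"])
    return {g: sum(v) / len(v) for g, v in rates.items()}

def review_extant_findings(rates):
    # Step 2: review of extant findings - check observed gaps against prior work (stub).
    return {"observed_rates": rates, "prior_work_consistent": None}

def select_variables(records):
    # Step 3: selection of the right variables - drop the sensitive attribute here.
    return [{k: v for k, v in r.items() if k != "group"} for r in records]

def build_model(features):
    # Step 4: responsible and ethical model development (stub: majority-class rule).
    majority = round(sum(f["outcome"] for f in features) / len(features))
    return lambda _: majority

def analyze(model, records):
    # Step 5: data analysis - re-check predictions for group-level disparities.
    return {r["group"]: model(r) for r in records}

def act_on_insights(analysis):
    # Step 6: action on insights (stub: report the audit result).
    return f"audit complete: {analysis}"

# Hypothetical customer records: sensitive attribute 'group', favorable outcome = 1.
records = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 1},
]
rates = identify_bias(records)
report = act_on_insights(analyze(build_model(select_variables(records)), records))
print(rates)  # {'A': 1.0, 'B': 0.5}
```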

1. Introduction

The world is witnessing groundbreaking changes emerging from the application of artificial intelligence (AI). AI has revolutionized many sectors, including healthcare, education, retail, finance, insurance, and law enforcement, and is increasingly adopted due to its ability to perform complex tasks at a level comparable to humans. Companies are expected to spend around $98 billion on AI globally in 2023 (International Data Corporation, 2019). This makes sense, as AI solves critical business issues, helping organizations become more efficient and gain competitive advantage while saving on operational costs (Davenport & Ronanki, 2018; Oana, Cosmin, & Valentin, 2017; Rai, 2020). However, the use of AI is not without limitations.

With the increasing popularity of automating and enhancing business processes with AI, many scholars and practitioners have voiced concerns regarding the dark sides of AI; concerns over fairness and algorithm bias, in particular, have grown (Wang, Harper, & Zhu, 2020). Algorithm bias occurs when AI produces systematically unfair outcomes that arbitrarily put a particular individual or group at an advantage or disadvantage over another (Gupta & Krishnan, 2020; Sen, Dasgupta, & Gupta, 2020). Such bias arises mainly from unrepresentative datasets or flaws in algorithm design, and it particularly affects underrepresented minority groups (Gupta & Krishnan, 2020; Mullainathan & Obermeyer, 2017; Obermeyer, Powers, Vogeli, & Mullainathan, 2019). Many recent cases have showcased gender, racial, and socioeconomic biases emanating from AI applications. Examples include facial recognition systems, such as Amazon's AI-based "Rekognition" software, discriminating against darker-skinned individuals and producing unreliable results in identifying females; Google's AI hate speech detector producing racially biased outcomes; Google showing fewer ads to females than to males when recruiting for high-paying jobs; Amazon abandoning an algorithmic human resources recruitment system for reviewing and ranking applicants' resumes because it was biased against women; a racial bias in a medical algorithm developed by Optum that favored white patients over sicker black patients; and the robodebt scheme in Australia, which wrongly and unlawfully pursued hundreds of thousands of welfare clients for debts they did not owe (Blier, 2019; Hunter, 2020; Johnson, 2019; Martin, 2019).
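The group-level disparities described in these cases are commonly quantified with simple fairness metrics. The sketch below (not from the article) computes two widely used ones, demographic parity difference and the disparate impact ratio, on hypothetical decision data; the group labels and approval outcomes are invented for illustration.

```python
# Illustrative fairness metrics on hypothetical data (not the article's analysis).

def selection_rate(outcomes):
    """Fraction of a group that received the favorable outcome (coded as 1)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favorable-outcome rates between two groups (0 = parity)."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

def disparate_impact_ratio(protected, reference):
    """Ratio of selection rates; values below 0.8 are often flagged
    under the 'four-fifths rule' used in US employment-discrimination practice."""
    return selection_rate(protected) / selection_rate(reference)

# Hypothetical loan-approval decisions (1 = approved) for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 = 0.75 approval rate
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 = 0.375 approval rate

print(demographic_parity_difference(group_a, group_b))  # 0.375
print(disparate_impact_ratio(group_b, group_a))         # 0.5 -> fails four-fifths rule
```

Metrics like these only surface a disparity; whether it reflects unlawful or unethical bias still requires the kind of contextual review the article's post-hoc approach describes.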

The impact of algorithm bias can be devastating, asymmetric, and oppressive, with individuals discriminated against and businesses negatively affected. Despite the growing understanding of algorithm bias and its effects, research in this stream lacks a systematic discussion of how bias can affect service systems and how it can be addressed in data-driven decision making. Therefore, this paper responds to the question: 'How can algorithm bias in AI-driven customer management be addressed?' The main objectives of the current study are: 1) to review and analyze algorithm bias in customer management; 2) to synthesize the systematic literature review findings into a decision-making framework; and 3) to provide future research directions based on the identified knowledge gaps. The systematic literature review on the emerging topic of algorithm bias contributes to the AI literature mainly by providing a clear picture of the determinants of algorithm bias and its effects on customer management. This study also uniquely contributes to theory by presenting a theoretical framework that identifies four consistency measures and six post-hoc measures to address algorithm bias in customer management. Further, this study contributes to the debate on responsible innovation and ethical AI (Ghallab, 2019; Gupta & Krishnan, 2020; Rakova et al., 2020) by scrutinizing the key ethical challenge of algorithm bias in AI applications.
