Artificial Intelligence (AI) and the Future of Information Privacy: Expert Viewpoints

Alfred Akakpo (University of Northampton, UK), Evans Akwasi Gyasi (Anglia Ruskin University, UK), Bentil Oduro (Coventry University, UK), and Sunny Akpabot (Coventry University, UK)
Copyright: © 2025 | Pages: 25
DOI: 10.4018/JGIM.383050

Abstract

AI's expanding role in daily life enables data-driven decision-making but also heightens the risk of privacy breaches. Drawing on Communication Privacy Management theory, this study explores the future privacy implications of AI adoption. Through thematic analysis of interviews with 42 AI experts, we identify six key themes: human agency, data use/abuse, AI transparency/opacity, information weaponization, cyberbullying, and privacy enforcement. These themes contextualize the evolving AI-privacy relationship. We argue that AI's advancement will challenge traditional concepts of privacy ownership. This research provides insights into navigating the complex interplay between AI's growth and safeguarding information privacy.

Introduction

Human activities are structured and facilitated by sophisticated communication, information, and technology infrastructures. These advances are reshaping our modes of communication, business operations, and information exchange among friends, families, organizations, and global communities. Artificial intelligence (AI) stands out as a pervasive modern technology with vast potential to profoundly transform interactions, lifestyles, and professional engagements (Fogli & Tetteroo, 2022; Jang, 2023; Ku & Chen, 2024; Smilansky, 2017; Wang et al., 2023). However, the immense power of AI technology, given its access to large and rich datasets, has raised concerns about privacy and anonymity.

According to Madakam et al. (2015), AI is a computer-generated technology that uses natural algorithms to perform tasks that would otherwise require human intelligence. The literature on AI and the future of data protection and privacy has attracted significant interest in both academia and industry (Wu et al., 2019). For example, researchers have probed the privacy implications of using AI technologies to administer justice in courts (Fiechuk, 2019), of applying AI in public sector management (Maragno et al., 2023), and of integrating AI into smart meter technology (Lodder & Wisman, 2015). Collins et al. (2021) conducted a systematic literature review of AI in information systems research. In addition, Wang et al. (2023) proposed the concept of AI literacy and developed a quantitative scale for measuring it, concluding that the scale will not only build an understanding of user competency with AI technology but will also help designers develop AI applications aligned with the AI literacy of target users. Johnson and Verdicchio (2017) explored public anxiety surrounding artificial intelligence, particularly concerning privacy, and identified a primary driver of this apprehension: the fear that AI systems could soon operate beyond the limits of human control. They argue, however, that the focus on AI programs without human involvement is flawed because, regardless of advances in AI and superintelligence, such programs will still require considerable human engagement. A further source of anxiety, they suggest, lies in autonomy, that is, the extent to which AI programs can make decisions without human intervention.

Johnson and Verdicchio (2017) noted that leaving computers to make such decisions without human involvement could affect privacy, and, importantly, they differentiated between computational and human autonomy. Humans make decisions grounded in fundamental rights and are well equipped to understand the context of human decision-making; computational autonomy, by contrast, may fail to make such judgements when the context changes. Thus, computational autonomy appears to conflict with human autonomy in matters of data protection and privacy. AI researchers such as Müller (2016) have asserted that, in a futuristic scenario, robots could behave like humans and may harm others to achieve their objectives.
