Introduction
Human activities are structured and facilitated by sophisticated communication, information, and technology infrastructures. These advances are reshaping how we communicate, conduct business, and exchange information among friends, families, organizations, and global communities. Artificial intelligence (AI) stands out as a pervasive modern technology with vast potential to profoundly transform interactions, lifestyles, and professional engagements (Fogli & Tetteroo, 2022; Jang, 2023; Ku & Chen, 2024; Smilansky, 2017; Wang et al., 2023). However, the immense power of AI systems with access to large and rich datasets has raised concerns about privacy and anonymity.
According to Madakam et al. (2015), AI is a computer-based technology that uses natural algorithms to perform tasks that may require human intelligence. The relationship between AI and the future of data protection and privacy has attracted significant interest in both academia and industry (Wu et al., 2019). For example, researchers have probed the implications for the right to privacy in several settings: the use of AI technologies to administer justice in the courts (Fiechuk, 2019), the application of AI in public sector management (Maragno et al., 2023), and the integration of AI into smart meter technology (Lodder & Wisman, 2015). Collins et al. (2021) conducted a systematic literature review of AI in information systems research. In addition, Wang et al. (2023) proposed the concept of AI literacy and developed a quantitative scale for measuring it. They concluded that the scale not only advances understanding of users' competency with AI technology but can also help designers develop AI applications that align with the AI literacy of target users.

Johnson and Verdicchio (2017) explored public anxiety surrounding artificial intelligence, particularly concerning privacy. They identified a primary driver of this apprehension: a significant fear that AI systems could soon operate beyond the limits of human control. They also pointed out that a focus on AI programs operating without human involvement is flawed because, regardless of advances in AI and superintelligence, such programs will still require a considerable degree of human engagement. Furthermore, they argued that the next form of anxiety will emanate from autonomy, that is, the extent to which AI programs can make decisions without human intervention, and that leaving such decisions to computers could raise privacy issues. Importantly, Johnson and Verdicchio (2017) differentiated between computational and human autonomy: humans make decisions grounded in fundamental rights and are well equipped to understand the context of human decision-making, whereas computational autonomy may not be able to make such judgements when the context changes. Thus, computational autonomy appears to conflict with human autonomy in matters of data protection and privacy. AI researchers such as Müller (2016) have asserted that, in a futuristic scenario, robots could behave like humans and may harm others to achieve their objectives.