Creepy Technologies and the Privacy Issues of Invasive Technologies

Rochell R. McWhorter (The University of Texas at Tyler, USA) and Elisabeth E. Bennett (Northeastern University, USA)
DOI: 10.4018/978-1-7998-2914-0.ch010


Technology has become increasingly invasive as corporate networks expand into public and private spaces to collect unprecedented amounts of data and to provide new services, such as artificial intelligence delivered through unsettling human-like personas. The term “creepy technology” appears in the scholarly literature alongside concerns about privacy, ethical boundaries, cybersecurity, and mistaken identity, and in news articles that inform the public about technological advances affecting consumer privacy. Invasive technology provides the impetus for external adaptation in many organizations, and current trends require rapid adaptation to potential security threats. Usability also addresses how users respond and adapt to new technology. This chapter presents an exploratory study of how the public responded to various technology announcements (N = 689 responses); results indicated a significant response to invasive technologies and some sense of freedom to opine. The chapter also discusses interventions that are critical to both the public and private sectors.
Chapter Preview


Examples of creepy technology found in the literature are numerous (Martin, 2019; Purshouse & Campbell, 2019; Vladimir, 2018; Wilkins, 2018). For instance, research on creepy technology includes studies of institutions of higher education that use facial recognition technology (FRT) to monitor students (see Cole, 2019; Cuador, 2017; Lieberman, 2018; Reidenberg, 2014), as well as research examining trends in invasive technology (Aratani, 2019; Brown, 2019; Symanovich, 2018). Also, Wang and Kosinski’s (2018) controversial research attempted to predict sexual orientation by analyzing digital pictures, and the researchers remarked that “given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women” (p. 246). Thus, such predictions could harm people who are identified or misidentified and subjected to discrimination, as well as cause discomfort about something so personal.

Key Terms in this Chapter

Location Sharing Applications: Mobile device applications that allow users to share their location in real time.

Biometric Identification: Utilizing human behavioral and physical features to establish identity.

Convergence of Big Data: The coming together of large data sets and machine learning.

Creepy Technology: Technology that evokes a feeling or belief that privacy may be invaded in an unethical or discomforting manner.

Facial Recognition: Technology utilizing biometrics to recognize a human face.

AI Robot: A man-made machine that can reproduce some elements of human intellectual ability, such as solving problems, memorizing facts, or gathering information through sensors.

Digital Voice Assistant: Technology activated by speaking the assistant’s name, which then carries out commands such as writing emails or messages, placing phone calls, reading content from various media (such as email, websites, or messages), turning on lights or music, or activating security cameras.

Artificial Intelligence: The capability of a computer to mimic human behavior, often through machine learning.
