Introduction
Algorithms are the core driving force behind the development of artificial intelligence and a pivotal factor in contemporary enterprises’ digital management and the construction of intelligent societies (Shin, 2021a; Shin, Rasul, et al., 2022). Intelligent recommendation systems use artificial intelligence algorithms to analyze extensive user data, enabling the system to provide each user with products, services, or information tailored to their interests and needs. This represents a typical manifestation of algorithmic applications (Shin, Kee, et al., 2022). Intelligent recommendation services, formed through the application of algorithmic technology, have become a critical element in shaping the core competitiveness of internet platforms. Increasingly, businesses are adopting algorithms to optimize product and service design, enhance user experiences, and stand out in the intense landscape of commercial competition.
Intelligent recommendation systems, which mine personal data to capture user demands, offer personalized services and have been extensively applied across domains such as social media, e-commerce, and news, deeply permeating many aspects of public life. Intelligent recommendation technology has transformed information retrieval, effectively alleviating the “information overload” phenomenon of the big data era (Bobadilla et al., 2013). However, the widespread adoption of this technology has also introduced numerous risks, drawing attention and concern over adverse effects such as personal privacy breaches, information cocoons, internet addiction, algorithmic price discrimination, and excessive consumption induced by algorithmic recommendations (Han et al., 2023; Xu, 2022).
Although policies, regulations, and systems related to algorithm governance are gradually improving, these enhancements do not wholly alleviate individuals’ negative emotions toward algorithms. Survey results indicate that people vary in their acceptance of algorithmically recommended content (Smith, 2018). Negative emotions toward algorithms are related not only to objective algorithmic risks but also, significantly, to individuals’ subjective perceptions. Avoidance behavior is considered one of the fundamental responses of organisms to environmental stimuli (Gilbert et al., 1998; Schneirla, 1959). Users’ information avoidance behavior helps reduce cognitive burden and alleviate adverse emotions arising from frequent algorithmic recommendations (Case et al., 2005), and it is seen as an effective means of coping with risk and alleviating negative emotion (Zhao & Liu, 2021).
For businesses, achieving digital transformation requires addressing not only the capital and technological barriers to innovation but also the “user barriers” that arise after algorithms are implemented. A deeper understanding of the causes and consequences of individuals’ negative emotions toward algorithms can help businesses overcome these barriers and fully leverage the positive effects of algorithms. Investigating questions such as “Why do people engage in algorithmic avoidance?” and “Under what conditions is users’ algorithmic avoidance strengthened or weakened?” can help businesses understand the obstacles that algorithms pose to organizational development. It can also aid governments in formulating new guidelines for advancing digitalization and promoting algorithmic risk prevention and governance. Consequently, this study aims to explore the motivations influencing users’ algorithmic avoidance behavior in intelligent recommendation systems, with the expectation of fostering positive interaction between users and technology and providing guidance for the effective development of enterprises using these systems.