Facebook and Google as Regrettable Necessities

Pietro Frigato, Francisco J. Santos-Arteaga
Copyright: © 2020 | Pages: 14
DOI: 10.4018/IJSDS.2020010102

Abstract

The following article considers the results from two different studies, a European one involving over 20,000 respondents and an American one with close to 1,000, to illustrate how online platforms such as Facebook and Google can be defined as regrettable necessities. We define regrettable necessities as goods whose consumption provides a direct disutility to consumers. That is, in addition to the standard utility derived from access to a given service, their use generates a direct disutility in terms of privacy losses and preference manipulation. Moreover, users acknowledge this fact and are aware of the disutility suffered, though not necessarily of its intensity, which highlights the fundamental strategic role played by these platforms in current voting environments.
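The formalization below is not given in the article itself; it is a minimal sketch of how the definition above could be written, assuming a separable utility specification in which the disutility term captures privacy losses and preference manipulation:

$$U(x) = u(x) - d(x), \qquad u'(x) > 0, \quad d(x) > 0 \ \text{for } x > 0,$$

where $x$ denotes the intensity of platform use, $u(x)$ the standard utility derived from access to the service, and $d(x)$ the direct disutility acknowledged by users. Under this sketch, the platform behaves as a necessity whenever observed usage satisfies $U(x) > U(0)$: users keep consuming despite the regret term $d(x)$.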
Article Preview

1. Introduction

The capacity of online information providers to manipulate the preferences and decisions of Internet users has recently become a prominent topic, given the social emphasis placed on fake news and the increasing interest in big data analysis (Schneier, 2018). This has been the case despite the fact that the Internet was initially considered by its most radical supporters to be a free and frictionless information allocation mechanism perfectly matching suppliers with demanders (Golumbia, 2016). Its capacity to process enormous amounts of information and distribute it freely across users led the most optimistic of them to expect a virtually perfect exchange of information. However, it was the extraction of information from users that became one of the main pillars of the resulting online market, to the point that “by the mid-2010s the average reader on news sites like Boston Globe’s boston.com would be subjected to extraordinary surveillance methods, with only the barest degree of consent” (Wu, 2016, p. 321).

The online interactions taking place between information providers and Internet users have generated a substantial body of empirical literature illustrating that “the rankings of search results provided by search engine companies have a dramatic impact on consumer attitudes, preferences, and behavior” (Epstein and Robertson, 2015, p. E4512). These biases are seemingly due to the trust with which users endow the companies in charge of the search engines to rank the results according to their subjective preferences. This is the case despite the fact that “users generally have no idea how results get ranked” (Epstein and Robertson, 2015, p. E4512). The trust placed in an abstract algorithm (designed and updated by human engineers) applies also to Facebook, despite the decrease in satisfaction levels experienced by its users (Kourouthanassis et al., 2015).

One of the main consequences derived from the strategic process of information collection (and transmission) has been the emergence of the Facebook-Google duopoly of information providers, whose dominance over the market is expected to continue increasing:

Google and Facebook are set to attract 84 per cent of global spending on digital advertising, excluding China, in 2017, underscoring concerns that the two technology companies have become a digital duopoly (Garrahan, 2017).

The use of Facebook and Google is so widespread and routinized that the data retrieved from their users is being used to generate increasingly accurate profiles of the population (Schneier, 2014). Users generally acknowledge this fact, particularly when dealing with privacy concerns (De Wolf et al., 2017), yet continue to use social network sites and search engines on a regular basis. In evolutionary theory, a routine arises whenever a given behavioral pattern is socially accepted among the population.1 In the current context, such a definition implies that whenever the use of an online platform becomes widespread and accepted as part of the standard behavior to follow, the costs arising in terms of privacy losses and potential manipulability are accepted and assimilated by the population. That is, users are willing to provide online platforms with the information required on a daily basis despite knowing that it can be exploited in a nontransparent way.

Among the theories proposed to justify such behavior, the scopophilia approach of David Lyon (2006) has gained considerable momentum. The willingness to compete by displaying private information could be considered one of the main incentives driving users to share preference-related data in exchange for free access to the different products of online platforms. This feature links the behavior of users to the positional competition concept developed by Fred Hirsch (1977), where individuals compete within the social spectrum for increased, though marginal, recognition. Within such a framework, “potential customers are choosing to enter into these quasi-feudal user relationships because of the enormous value they receive from them” (Schneier, 2014, p. 60), since the services provided by online platforms constitute “the tools of modern life”.
