Trolls, Bots, and Whatnots: Deceptive Content, Deception Detection, and Deception Suppression

Anna Klyueva
DOI: 10.4018/978-1-5225-8535-0.ch002

Abstract

Trolls and bots are often used to artificially alter, disrupt, or even silence legitimate online conversations. Disrupting and corrupting the online civic engagement process creates ethical challenges and undermines social and political structures. Trolls and bots often amplify spurious deceptive content, as their activity artificially inflates support for an issue or a public figure, thus creating mass misperception. In addressing this concern, the chapter examines how trolls (humans) and bots (software robots that exhibit human-like communication behavior) affect online engagement in ways that perpetuate deception, misinformation, and fake news. In doing so, the chapter reviews the literature on online trolling and chatbots to present a list of research-based recommendations for identifying (deception detection) and reacting to (deception suppression) trolls and bots.
Chapter Preview

Introduction

Before all details of the Santa Fe shooting in Texas became available to the public, the Internet was already abuzz with information and misinformation about the shooter and the victims, and with speculation about how and why it happened. Less than 20 minutes after the shooter was named, a fake Facebook account was created in his name with images of him wearing a “Hillary 2016” hat (Harwell, 2018; see Figure 1). Similarly, coverage of the Parkland shooting in Florida was interjected with deceptive content when a video portraying the survivors as “crisis actors” and a #CrisisActors hashtag began trending on social media (Snider, 2018; see Figure 2).

In their examination of Twitter in the aftermath of the Boston marathon bombing, Cassa, Chunara, Mandl, and Brownstein (2013) found that social media play a vital role in the early detection and description of emergency situations. During crisis events, the demand for information often overwhelms its supply. People actively seek information and, in its absence, settle for anything they can get their hands on. Although a large volume of content is posted on social media every moment, not all of it is of good quality, relevant, or able to reach the right audience. The technical affordances of social media contribute to the dissemination of deceptive content, primarily through social influencers manufacturing public opinion and bots automatically sharing and retweeting information that has not been fact-checked or verified. Given that information on social media is shared and accessed in real time, the reach and effects of deception can be unpredictable (Gupta, Lamba, Kumaraguru, & Joshi, 2013).

As such, the contemporary social media ecology presents a plethora of social, political, and economic incentives to develop software robots that can exhibit human-like communication behavior to facilitate information management (Ferrara, Varol, Davis, Menczer, & Flammini, 2016). At the same time, trolls and bots often amplify spurious deceptive content, as their activity artificially inflates support for an issue or a public figure, thus creating mass misperception. As Ferrara et al. (2016) explained, “The novel challenge brought by bots is the fact that they can give the false impression that some piece of information, regardless of its accuracy, is highly popular and endorsed by many, exerting an influence against which we haven’t yet developed antibodies” (p. 2). This capacity to “engineer social tampering” (Ferrara et al., 2016, p. 2) is one of the biggest dangers that both trolls and bots present.

Figure 1. Fake Facebook profile [Twitter screenshot from Chris Sampson @TAPSTRIMEDIA]

Figure 2. Trending #CrisisActors [Twitter screenshot from Jordan Sather @Jordan_Sather]

In exploring this concern, this chapter examines how trolls (humans) and bots (software robots that exhibit human-like communication behavior) affect online engagement in ways that perpetuate deception, misinformation, and fake news. In so doing, the chapter explains what trolls and bots are, outlines their characteristics, and considers who is susceptible to them on social media and why. The chapter also reviews the diverse literature on online trolling and chatbots and provides research-based recommendations for recognizing trolls and bots on social media and reacting to them appropriately.
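To make the recognition recommendations concrete, the following minimal Python sketch scores an account on heuristic signals the bot-detection literature commonly cites: posting volume, duplicated content, follower-to-following imbalance, and a sparse profile. The AccountStats fields, the bot_likeness function, and all thresholds are illustrative assumptions, not a validated detector from this chapter.

# Heuristic bot-likeness scoring sketch.
# All field names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AccountStats:
    posts_per_day: float      # average posting frequency
    duplicate_ratio: float    # share of posts repeating earlier text (0-1)
    followers: int
    following: int
    profile_has_photo: bool

def bot_likeness(stats: AccountStats) -> float:
    """Return a 0-1 score; higher means more bot-like."""
    score = 0.0
    if stats.posts_per_day > 50:          # sustained high-volume posting
        score += 0.35
    if stats.duplicate_ratio > 0.5:       # mostly repeated/retweeted content
        score += 0.30
    if stats.following > 0 and stats.followers / stats.following < 0.1:
        score += 0.20                     # follows many, followed by few
    if not stats.profile_has_photo:       # sparse, template-like profile
        score += 0.15
    return min(score, 1.0)

# Example: a high-volume account that mostly reposts identical text.
suspect = AccountStats(posts_per_day=120, duplicate_ratio=0.8,
                       followers=40, following=2000,
                       profile_has_photo=False)
print(f"bot-likeness: {bot_likeness(suspect):.2f}")

In practice, published detection systems combine many such features in trained classifiers rather than relying on a handful of fixed thresholds, but the underlying signals are of this kind.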

Key Terms in this Chapter

Social Bots: Software robots designed to produce content automatically and interact with users while imitating real people. Social bots successfully mimic human communication behavior online to build trust by posting links to paid content, gaining followers, replying, and reacting to interactions, with the goal of promoting a product or agenda.

Promotional Reviewing: A practice in which for-profit organizations that find the manipulation of online opinion sufficiently economically consequential pay for opinion manipulation trolls and fake online reviews. Also known as strategic manipulation.

Astroturfing: A technique used to generate large volumes of publicity with the purpose of influencing public opinion by appearing as an authentic grassroots movement. The term derives from “astroturf” – artificial green grass.

Trolls: Internet users engaged in the practice of posting provocative, often deliberately misleading and pointless, comments with the intent of provoking others into conflict and/or meaningless discussion. There are paid trolls (see also opinion manipulation trolls) and non-paid trolls (mostly hooligans).

Machinic Bots: Software robots present on the internet and social media that are easily identifiable as such.

Bot-Assisted Person: A strategy used by social media personalities that allows them to focus on creating content while a social bot undertakes the groundwork by liking, commenting, following, and unfollowing users automatically with the purpose of growing their profile.

Botnet: A network of bots programmed and operated by a single owner.

Twitter Bomb: A communication strategy on Twitter that involves repeated posting of the same content using the same hashtag, often from multiple accounts, with the purpose of promoting certain content, an issue, or an idea. A Twitter bomb may result in a high Google ranking for the promoted content (a rough detection sketch follows these key terms).

Bots: An abbreviation of the phrase “software robot.”

Opinion Manipulation Trolls: Internet trolls who are paid to participate in online conversations and discussions with the purpose of swaying public opinion or advocating for a specific issue.
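As noted in the Twitter bomb entry above, the pattern is mechanical enough to sketch in code: the same message and hashtag repeated across many distinct accounts. The flag_twitter_bombs function below, its input format, and its thresholds are hypothetical illustrations, not an established detection method.

# Illustrative Twitter-bomb pattern check: the same text and hashtag
# repeated across many distinct accounts. Thresholds are assumptions.

from collections import defaultdict

def flag_twitter_bombs(posts, min_accounts=20, min_posts=50):
    """posts: iterable of (account_id, hashtag, text) tuples.
    Returns (hashtag, text) keys pushed by many distinct accounts."""
    accounts = defaultdict(set)   # message key -> set of posting accounts
    counts = defaultdict(int)     # message key -> total number of posts
    for account_id, hashtag, text in posts:
        # Normalize case and whitespace so near-identical posts cluster.
        key = (hashtag.lower(), " ".join(text.lower().split()))
        accounts[key].add(account_id)
        counts[key] += 1
    return [key for key in accounts
            if len(accounts[key]) >= min_accounts and counts[key] >= min_posts]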
