Finding Automated (Bot, Sensor) or Semi-Automated (Cyborg) Social Media Accounts Using Network Analysis and NodeXL Basic

Copyright: © 2020 | Pages: 40
DOI: 10.4018/978-1-7998-1754-3.ch060


Various research findings suggest that humans often mistake social robot (‘bot) accounts for human ones in a microblogging context. The core research question here asks whether social network analysis may help identify whether a social media account is fully automated, semi-automated, or fully human (embodied personhood), in the contexts of Twitter and Wikipedia. Three hypotheses are considered: that automated social media account networks will show less diversity and less heterophily; that automated social media accounts will tend to have a botnet social structure; and that cyborg accounts will combine select features of human and robot social media accounts. The findings suggest a limited ability to differentiate the level of automation of a social media account based on social network analysis alone, given how easily a determined and semi-sophisticated adversary can engage in network account sock-puppetry, but they do suggest some effective detection approaches when network analysis is combined with other information streams.
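The first hypothesis turns on measurable network properties. As a rough illustration of how "diversity" and "heterophily" might be operationalized, the sketch below computes two simple proxies on a toy ego network: the Shannon entropy of the degree distribution (uniform, hub-and-spoke botnet structures tend toward low entropy) and the fraction of ties joining unlike-labeled accounts. The edge list, the `kind` labels, and the metric definitions are all illustrative assumptions, not the chapter's actual NodeXL procedure or data.

```python
import math
from collections import Counter

# Toy ego network as an edge list: node 0 is a hub account that five
# other accounts connect to. The bot/human labels are invented for
# illustration only.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5)]
kind = {0: "bot", 1: "human", 2: "human", 3: "human", 4: "human", 5: "human"}

def degree_diversity(edges):
    """Shannon entropy (bits) of the degree distribution: a rough proxy
    for how varied the connection pattern is; botnet-like hub-and-spoke
    structures tend to score low."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    counts = Counter(deg.values())  # how many nodes share each degree
    n = len(deg)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def heterophily(edges, label):
    """Fraction of edges that join unlike-labeled nodes."""
    unlike = sum(1 for u, v in edges if label[u] != label[v])
    return unlike / len(edges)

print(degree_diversity(edges))   # low entropy: only two distinct degrees
print(heterophily(edges, kind))  # 1.0: every tie crosses the bot/human line
```

In practice the chapter's analysis works from much richer NodeXL graph metrics; the point of the sketch is only that the hypothesized differences are, in principle, computable quantities.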
Chapter Preview


With the popularization of social media, masses of people have gone online to interact, collaborate, and share; in their midst, non-human social media accounts have likewise proliferated. These automated and semi-automated accounts include social bots, sensor-based accounts, and human-assisting robots as well as cyborg accounts (human-assisted machine accounts; machine-assisted human accounts). When people interact on social media platforms, whether they are microblogging, gaming, sharing photos, or making connections with others, many generally assume that they are interacting with other people. On a number of social media platforms today, there are accounts that are masks for algorithmic actors—robots and sensors, as well as cyborgs (accounts representing human-assisted robots or robot-assisted humans).

The online social ecology benefits from many of the efficiencies and tasks of automated agents, whether their presence is noticed and identified or not. On microblogging sites, robots (‘bots) are used to send out critical messaging alerts about traffic, weather, environmental hazards, and missing children. On Wikipedia, vetted robots contribute to the coherence and functioning of the site by “injecting public domain data, monitoring and curating content, augmenting the MediaWiki software, and protecting the encyclopedia from malicious activity” (Halfaker & Riedl, 2012, p. 80). The Wikimedia Foundation Inc. also runs their own scripts to control against page vandalism and vulgarity; it has human staff to control against public relations manipulations and personal attacks (Safer, Apr. 5, 2015). Rambot, the encyclopedia’s first officially sanctioned robot, inserted census data into articles about counties and cities. Robots traverse virtually all social media platforms in order to collect information and provide services.

There are high hopes expressed in the research literature for the services that robots may provide to humans. One research team identified a range of early and functional chatterbot roles: digital assistant, information provider, and general chat (Tatai, Csordás, Kiss, Szaló, & Laufer, 2003, p. 9). Socialbots may be deployed in swarms across social media to mend hard feelings and promote social comity:

Swarms of ‘bots could be used to heal broken connections between infighting social groups and bridge existing social gaps. Socialbots could be deployed to leverage peer effects to promote more civic engagement and participation in elections (Hwang, Pearce, & Nanis, 2012, p. 40).

Social robots may be disarming; they may soothe communal feelings; they may mediate. Sensor networks, many with data flowing on social media platforms, are seen as potentially providing cost savings and other efficiencies for government services (Ylagan, 2014). For example, sensors may inform people of traffic congestion on the public roads; they may inform about hazardous chemicals or bioagents in the environment; they may enhance security awareness at critical sites, and broadcast select and relevant information broadly.

On the other hand, disruptive automated agents have also been deployed on social media platforms to dupe human users into revealing sensitive information and so compromise their finances, protected information, and personal security. They autogenerate and distribute spam broadly to sell a variety of goods and services, to promote misinformation, and to manage (false) impressions. Some are used to “socially engineer” people into revealing sensitive information, with risk implications for companies and even for national security. In one study, researchers created algorithmic agents that presented as friends of friends on Facebook in order to access targeted individuals’ private information and launch attacks against those individuals’ corporate employers (Elyashar, Fire, Kagan, & Elovici, 2013); the researchers concluded that they were able to access 50 – 70% of the queried employees’ private data in the targeted organizations. These researchers show that such “homing agents” enable organization intrusion and mining (Elyashar, Fire, Kagan, & Elovici, 2013; Elishar, Fire, Kagan, & Elovici, 2012). The automation of such compromises means that this may be done at negligible cost and at mass scale (Boshmaf, Muslukhov, Beznosov, & Ripeanu, 2012). Hired (or manipulated) “sybils” or “paid participants” often come at much higher cost than automation, even when developing markets are exploited for such applications of human labor. Robot accounts (including sensor-based ones) are used for both benign and malign purposes.
