Schrodinger's Deepfake: Multimodal Analysis to Combat Deepfakes

Paul Siegel
DOI: 10.4018/978-1-7998-6474-5.ch008

Abstract

Deepfakes are an emerging threat to media. Many authors have identified social and digital media as multimodal artifacts; others have identified the weaponization of that same digital media. In earlier eras when media was weaponized, students were taught persuasive techniques, not as an incentive to use them, but as a safeguard against being exploited by them. In this new age of digital, multimodal propaganda, adapting those persuasive techniques will help to combat the dangers our society faces. This chapter examines three different anti-propaganda interventions and their implementation against a deepfaked video.

Introduction

The summer of 1969 marked a milestone in the American space race: the first manned mission would land on the moon’s surface. President Richard M. Nixon knew there were two possible outcomes to a manned space flight: total success or abject, costly, public failure. Nixon’s team therefore drafted two separate speeches, one addressing the nation upon the mission’s success and one addressing the nation about its failure. With the successful landing and return of Apollo 11, Nixon never had to give the failure speech, and until recently it had never been spoken or recorded. A small group of computer engineers, film producers, and professors at the Massachusetts Institute of Technology took the speech written for Nixon and created “In the Event of Moon Disaster” (2020). Using archival footage, they created a deepfake AI simulation of President Nixon delivering the speech. It was a success: President Nixon appeared on screen, against the same background, and said the words he had never said.

While this was an academic exercise designed to showcase the possibilities of the technology, it also demonstrated the limitations of current pedagogical practices in combating deepfakes. Imagine, for example, the issues that would arise if a current politician had their speeches, photos, or other media maliciously modified and widely distributed. Would the world be able to tell the difference? Indeed, has the world adequately educated its students to understand this emerging threat?

Indeed, that event has come to pass. While research for this chapter was underway, RT (formerly Russia Today), the state-owned Russian news agency also identified as a propaganda arm of the Putin government (US Department of State, 2020), released a deepfaked video of President Donald Trump exploring the RT studios and appearing in its broadcasts. The video used speeches and videos of President Trump taken out of context and redubbed to suit RT’s and, by extension, the Russian government’s purposes. Here, the video serves as an example of the potential harm that deepfakes can cause and of how they can be combatted; a translation of the Russian subtitles and text is included in Appendix A. This chapter examines the deepfake of President Trump to highlight how multimodal literacy and propaganda literacy can combat deepfakes, and demonstrates how three different approaches to propaganda literacy (TAP, the ABCs of Propaganda, and Q/TIP) can be modified to meet the challenge of deepfakes in the media.

Figure 1. Deepfake example taken from RT

Background

Understanding the Origins of Deepfakes

A deepfake is an AI-created simulation of a person, manipulated so that the person appears to say things they never said or do things they never did (Westerlund, 2019). Deepfakes have been used to insert people into pornographic movies they were never in (Öhman, 2020) and to manipulate the speech and words of political figures (see Del Viscio, 2020, for an example). The technology has not yet been used to attack governments, but as the RT video demonstrates, it is only a matter of time before that happens (Westerlund, 2019). With this rise in media usage, there has been a connected rise in educating students to comprehend these emerging technologies (Mills, 2010), producing pedagogical advances in both the literacy and digital realms. However, as technology continues to outpace educational advances, educators must turn to the tools they already have to combat these attacks. Educators must reconceptualize the trouble with deepfakes: the problem is not a lack of comprehension of deepfakes but a failure to see deepfakes as a form of persuasion. Indeed, it is in persuasion (Hobbs, 2020) that one might find the best way to combat deepfakes: by drawing from the pedagogies of literacy, multimodality, and persuasive writing.

Key Terms in this Chapter

Multimodal Literacy: A term originating in social semiotics that refers to the study of language that combines two or more modes of meaning.

Digital Literacy: Having the skills needed to live, learn, and work in a society where communication and access to information increasingly occur through digital technologies such as internet platforms, social media, and mobile devices.

Propaganda Analysis: The identification of the ideology and purpose of intentionally altered media.

Artificial Intelligence: The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Deepfake: A video of a person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information.
