Educational Responsibility in the Deepfake Era: A Primer for TPACK Reform

Rebecca J. Blankenship (Florida Agricultural and Mechanical University, USA)
DOI: 10.4018/978-1-7998-6474-5.ch001

Abstract

Choosing technologies that match desired student learning outcomes in today's technology-integrated classrooms presents educators with multiple instructional design challenges. As students continue to have broad access to information from a variety of web-based platforms, teachers are increasingly tasked with ensuring that the information used to complete key assignments is authentic and comes from verifiable sources. Deepfakes in images, audio, video, and digital text are more prevalent than ever, as numerous programs using artificial intelligence (AI) can significantly alter original content and fundamentally change its intent. A discussion of educational and pedagogic responsibility in the era of deepfakes will serve as the primer for reform of the TPACK construct, with recommendations for remediating student work in which deepfake resources were utilized.
Chapter Preview

Introduction

Choosing technologies that match desired student learning outcomes in today's technology-integrated classrooms presents educators with multiple instructional design challenges. As students continue to have broad access to information from a variety of web-based platforms, teachers are increasingly tasked with ensuring that the information used to complete key assignments is authentic and comes from verifiable sources. Deepfakes in images, audio, video, and digital text are more prevalent than ever, as numerous programs using Artificial Intelligence (AI) can significantly alter original content and fundamentally change its intent. Because many of these AI-based programs are user-friendly and easily accessible on the internet or on mobile devices as free or relatively inexpensive apps, the increase in deepfake content has been particularly noticeable within the last decade. Accordingly, educators are now tasked with employing best practices not only to teach basic digital literacy and citizenship skills but also to recognize how technology-immersed learning environments interact with deepfakes, while equipping students with the tools necessary to distinguish authentic from altered content. Thus, the purpose of this chapter is to explore techniques educators can use to mitigate deepfakes in key assignments and learning outcomes. Readers will explore the impact of deepfake imagery on teaching and learning as well as on the higher-cognitive processes leading to student learning outcomes. Techniques for debunking deepfake content, such as cross-referencing with non-digital resources, will also be discussed, along with an exploration of how deepfakes affect teachers' and learners' trust in common search and information resources such as Google and social media.
A discussion of educational and pedagogic responsibility in the era of deepfakes will serve as the primer for reform of the TPACK construct, filtered through the scaffolded lens of Vygotskian (1978) sociocultural theory as interpreted by Lantolf and Thorne (2006). Specifically, the notions of object, other, and self-regulation, using three distinct scaffolding techniques aligned with TPACK subdomain dyads, will be discussed as a potential remediation of deepfake content.

Canned messaging, stylized imagery, and visual manipulation of content are hallmarks of several image-driven industries, including advertising agencies, news programs, and social media outlets. Image manipulation is one method used to advance a desired narrative. The distribution of altered or false information is certainly not a new concept: one does not have to venture far into the immediate past to find examples of advertising executives, news anchors, field reporters, and politicians intentionally manipulating data, facts, and events to promote one agenda or viewpoint over another. With the continued rise of social media, online media, and mass communication outlets as channels for distributing information and news, a new type of disinformation has emerged, known as the "deepfake." Deepfakes can assume many forms, but the basic definition is the manipulation of audio, images, or video using some form of Artificial Intelligence (AI) housed within audio- and video-editing software. For educators and researchers, particularly those in higher education, deepfakes are problematic in terms of ensuring that learners are receiving accurate information to complete key assignments and achieve desired learning outcomes.

One of the hallmarks of education is access to relevant materials and resources that accurately reflect instructional content and anticipated student learning outcomes. For teachers, it is critically important that the information learners access and use to complete content-related tasks is sourced from legitimate, vetted resources that accurately reflect the information the student means to convey. However, a growing trend across the internet is the deliberate misrepresentation of otherwise factual information to sway end-users toward adopting, or acting against, a particular position or line of thinking. Manipulating images and text to deliberately misinform or mislead an intended audience, the rising phenomenon known as deepfakes, has already affected areas such as advertising, politics, and social media. While research on the effects of deepfakes in these areas has grown significantly over the past few years, particularly following their intense saturation of the 2016 election cycle, understanding how deepfakes can negatively affect teaching and learning remains an area in need of significant exploration.

Key Terms in this Chapter

Information Literacy: Defined as the ability to gather information from multiple resources to provide context for particular content.

Reproduction Literacy: Defined as the ability to create meaningful content based on curated digital resources.

Branching Literacy: Defined as the ability to contextualize information using a nonlinear, multidimensional approach.

Sociocultural Theory: Defined as a theory in psychology in which learners cognitively develop based on their personal and cultural interactions.

Scaffolding: Defined as the educational concept in which a learner receives additional instructional support around particular content.

Aerial Scaffold: Defined as a scaffold that provides mobile support in the form of a freely moving mechanism.

Supported Scaffold: Defined as a scaffold that is fixed using rods, platforms, and the like.

Self-Regulation: Defined as a learner negotiating content without object or other scaffolding.

Zone of Proximal Development: Defined as a construct in psychology identifying a learner’s ability level based on what he or she can or cannot do with scaffolded instruction.

Photo-Visual Literacy: Defined as how a learner interprets and applies meaning of photos and video to related content.

Technological, Pedagogical, and Content Knowledge (TPACK): Defined as a framework that explicates the interrelated skill sets a teacher needs to teach content using technology.

Digital Literacy: Defined as the set of skills needed for an end-user to learn, live, and work in a society that depends on technologies for access and communication.

Object Regulation: Defined as a learner being instructionally scaffolded using some object such as an app, computer, textbook, and the like.

Other Regulation: Defined as a learner being instructionally scaffolded by another individual such as a teacher or peer mentor.

Suspended Scaffold: Defined as a scaffold that is suspended using ropes, pulleys, and the like.

Deepfake: Defined as the deliberate manipulation of media using AI driven software to skew the context of audio, images, or video in an effort to promote a particular point of view.
