Redesigning Academic Debate Pedagogy for the World of Generative Artificial Intelligence
The growing prevalence of “generative artificial intelligence” (GAI), most recently accelerated by the release of ChatGPT, represents a fundamental transformation in human communication, learning, and knowledge production. From one perspective, GAI offers an opportunity to engage in pedagogical innovation by enhancing the classroom with tools aimed at helping students develop skills while reducing pressure on instructors (see e.g., Bauschard et al., 2023; Chance, 2020; Chen, 2023; Kelly, 2023). GAI also represents an invitation to augment human intellectual labor with digitally enhanced models of invention that value collaboration with humanity’s virtual progeny. Of course, such innovations must be accompanied by careful consideration of their ethical and relational implications (Wyman, 2023). Alternatively, GAI also represents a potential death knell for human education given the possibility of outsourcing task completion to digital agents. As Mark Massaro (2023), an English professor, recently wrote, “AI has infected higher education like a deathwatch beetle, hollowing out sound structures from the inside until the imminent collapse” (para. 2).
Moreover, the accelerating progress demonstrated by GAI invites ethical consideration of our relationship with the artificially intelligent beings now emerging around us (Cummings & Rief, 2023). Are they, or will they ultimately become, beings with rights, deserving our consideration as thinking (even living) things? As science fiction narratives have consistently asked: Should AI merely be a machinic thing at our service? Or does it deserve to be part of our lives in a way equivalent to other humans? Might we, in turn, end up at the service of, or ultimately destroyed by, AI? Such a prospect was once entertained only in fictional TV series and films (most famously the Terminator franchise). Now, the possibility of human “extinction” due to AI is being openly discussed in popular journalistic sources like the New York Times (Roose, 2023). Eliezer Yudkowsky (2023) of the Machine Intelligence Research Institute recently warned, “Shut it all down. We are not ready [for AI]. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong. Shut it down” (paras. 33-35). Whether or not this alarmist rhetoric is anchored in reality, a new world of human and technological interaction, engagement, and hybridity is indubitably taking shape.
Addressing all of the challenges presented by AI is beyond the scope of this chapter. Instead, we hope to take on one dimension of the broader controversy it has occasioned. Specifically, our focus is on how, as university scholar-teachers, we can manage the changes that GAI has already brought and will inevitably bring to our classrooms and extra-curricular activities. We understand GAI as including programs such as ChatGPT, which can generate or create content with minimal prompting from humans. This aligns with the definition given by the Cornell University Center for Teaching Innovation (2023), which states that “Generative artificial intelligence is a subset of AI that utilizes machine learning models to create new, original content, such as images, text, or music, based on patterns and structures learned from existing data” (“What is generative artificial intelligence (AI)?”, para. 1). The central challenge educators face is the use of GAI by students to complete assignments, thus undermining the integrity of the educational enterprise.
From one vantage point, to recover some semblance of our now potentially fading educational integrity, we may need to devise more elaborate means of checking student work for GAI assistance. Unfortunately, such detection technology is, at present, not particularly effective (Baidoo-Anu & Owusu Ansah, 2023; Bauschard et al., 2023; Phare, 2023). Even if it becomes effective, it will likely need constant updating to keep pace with new innovations. This raises the possibility of a pedagogical arms race, much like the one between cybersecurity experts and the malicious actors constantly attempting to steal money and private information. From our perspective, such a race is futile and distracting. It could also undermine the integrity of our shared teaching endeavor, because it would not only redirect our pedagogical focus away from teaching and toward surveillance, but would also erode trust between teachers and students. Thus, we face a conundrum: how should we deal with the massive changes wrought by GAI without succumbing to a vicious cycle of ineffective regulation, increasing technological surveillance, and ever-eroding trust and legitimacy?