Exploring Totalitarian Elements of Artificial Intelligence in Higher Education With Hannah Arendt

From the perspective of political philosophy, this article examines the extent to which artificial intelligence (AI) applications in higher education contain totalitarian elements. Drawing on the theoretical considerations of Hannah Arendt, the author first identifies the key characteristics of total domination and then relates these to two AI applications in the field of higher education: adaptive learning systems and AI-based text generators. On this basis, the article elaborates on the similarities between concrete AI technologies and totalitarian structures. Finally, the author formulates questions that can be used to examine if concrete AI applications exhibit totalitarian traits. The aim of this theoretical contribution is to provide a perspective that will help to identify new dangers of AI or to see already known dangers in a new light, leading to a deeper and broader discourse on the consequences of AI.


INTRODUCTION
AI-based systems are revolutionizing the relationship between humans and machines in a wide range of social areas (Grace et al., 2018). Particularly in the field of higher education, a major debate about the use of AI in teaching-learning settings has intensified, especially since the release of ChatGPT in November 2022. ChatGPT has suddenly confronted every university teacher with AI in their courses and forced them to think about what teaching and exams can look like with the new AI technologies (Lo, 2023; Gimpel et al., 2023). In this context, the following question is central: To what extent can and should intelligent algorithms help to shape learning and teaching and take on tasks in the tertiary education sector? In addition, there is a need to discuss more general dangers of the use of AI in higher education, such as how AI is fundamentally changing learning in students' lives and what risks this entails (Popenici & Kerr, 2017).
Reflections on general attitudes toward new technologies are philosophical and try to clarify under which circumstances and conditions AI should be used in different social contexts (e.g., in education). They aim to find ethical guidelines for the use of AI in social fields. In addition to the ethical debate, Coeckelbergh (2018, p. 5) argues that the use of AI can also be considered from the perspective of philosophies and political theories, "which offer theoretical resources that support more awareness and understanding … of our thinking about technology, the good life, and society." In this article, I take this consideration as my starting point by placing two recent AI applications from the field of higher education in the context of totalitarian rule. In doing so, I use Hannah Arendt's (1951; 2017) reflections on totalitarianism as the theoretical framework because her work is characterized by its openness and does not represent a self-contained, unified system of thought, so that individual theoretical considerations can also be related to other contexts (Gordon & Becevel, 2021). Arendt's reflections have received a great deal of attention, particularly in the field of higher education (e.g., Nixon, 2020; Jahn, 2017), which makes them well suited for this study. Through this approach, a new perspective can be taken that helps identify new dangers of AI or see already known dangers in a new light, leading to a broader and deeper discourse on the consequences of AI. The decision to examine the extent to which certain AI applications in higher education exhibit totalitarian features stems from the fact that fundamental criticisms of AI applications often address issues similar to those found in totalitarian structures. For example, AI cannot justify the decisions it makes and is not transparent; AI can also lead to a reduction in human connectedness (Lockey et al., 2021) and is already seen as a threat to democracy (Coeckelbergh, 2022).
The article is divided into four sections. In the first section, I describe Hannah Arendt's reflections on totalitarianism (Arendt, 1951; 2017) and elaborate on certain characteristics of this form of rule. In the second section, I introduce the topic of AI and outline current AI applications in higher education. In the third section, I define adaptive systems and AI-based text generators as concrete examples of the use of AI in teaching-learning settings and link them to Arendt's theoretical assumptions. The analysis focuses on the extent to which certain key features of AI applications fulfill the characteristics of totalitarian rule. In the fourth section, I discuss the results, formulate initial recommendations, and point out limitations. The discussion aims to highlight the fundamental dangers of AI technologies and encourage critical reflection on AI in higher education.

HANNAH ARENDT'S REFLECTIONS ON TOTALITARIANISM
Hannah Arendt was a political thinker (Arendt & Gaus, 1964; Weißpflug, 2019) whose reflections can be applied to different contexts and are relevant to different disciplines. In the current AI debate, Arendt often serves as a theoretical reference point when the impact of new technologies on politics and society is examined (e.g., Gordon & Becevel, 2021; Leins, 2019). Particularly in the context of AI being described as a threat to democratic society (Coeckelbergh, 2022), Arendt's reflections on totalitarianism help to provide a different perspective on the new technology.
Nazism and Stalinism are the starting point for Arendt's engagement with totalitarianism. Arendt adopts different perspectives to this end. In The Origins of Totalitarianism (Arendt, 1951; 2017), The Human Condition (Arendt, 1998), and Between Past and Future (Arendt, 1961), for example, the subject is examined from philosophical, political, and historical viewpoints and shows an enormous complexity (Canovan, 1992, p. 17).
Arendt understood totalitarianism as the absolute rule of a system in which political action is no longer possible and the world has become meaningless (Tassin, 2011). Arendt distinguished totalitarianism from forms of state such as dictatorship or tyranny. She argued that it cannot be captured or understood in terms of traditional political theories because totalitarianism represents "a break with all our traditions" and because totalitarian actions "have clearly exploded our categories of political thought and our standards for moral judgment" (Arendt, 1954, pp. 309-310).
One of the central features of totalitarian systems is the use of terror. Arendt described terror as a way of life in which every human being is absolutely powerless (Tassin, 2011): Terror destroys human interaction on all levels (political, social, and private) and erases all spontaneous action between people (Arendt, 1951; 2017, pp. 435, 496, 506). The lack of interaction simultaneously alienates individuals from their fellow human beings and from the shared world; they lose the possibility of appearing to each other as individual persons (Jaeggi, 2011). The powerlessness goes so far that people even lose their sense of self and are no longer able to think independently (Arendt, 1951; 2017, p. 500). Their lives are characterized by complete desolation (Arendt, 1951; 2017, pp. 506ff.). Thus, human beings are understood only in terms of certain generic characteristics and taken as a prototype:

Terror as the execution of a law of movement whose ultimate goal is not the welfare of men or the interest of one man but the fabrication of mankind, eliminates individuals for the sake of the species, sacrifices the 'parts' for the sake of the 'whole.' (Arendt, 1951; 2017, p. 496)

Totalitarian rule and the exercise of terror are based on the claim to follow unquestionable laws of nature or history. Arendt described this approach as totalitarian ideology (Arendt, 1951; 2017, pp. 420, 491f.). Totalitarian ideologies are closed systems of explanation that claim general validity for life and the world (Tassin, 2011). Accordingly, they claim an absolute explanation of the world, encompassing the past, present, and future (Arendt, 1951; 2017, p. 503). With the help of radical ideologies, totalitarianism deprives people of their own will, their ability to judge, and the possibility of a self-determined life; they are robbed of these things to fulfill an already determined destiny. As a result, the life of the individual person no longer has any meaning (Arendt, 1951; 2017, p. 485; Tassin, 2011). In this way, totalitarianism and terror become ends in themselves, while the human being is now only a means to a purpose (Arendt, 1951; 2017, p. 468).
This total oppression robs human beings of their primary characteristics and ultimately renders them superfluous (Canovan, 1992, p. 25). Tassin (2011) referred in this context to a mad domination characterized by meaninglessness. To conceal their own meaninglessness and maintain total control, totalitarian systems are constantly in flux, altering realities by creating new ones. Mixing reality and fiction causes people to lose faith in their own experiences and destroys common sense (Arendt, 1951; 2017, p. 468; Canovan, 1992, p. 55).
The aim of totalitarianism is to suppress all human beings and prevent all individual action and thought. In this way, people in totalitarian systems are so disconnected from themselves, others, and the world that they can destroy everything:

[I]n their effort to prove that everything is possible, totalitarian regimes have discovered without knowing it that there are crimes which men can neither punish nor forgive. When the impossible was made possible it became the unpunishable, unforgivable absolute evil which could no longer be understood and explained by the evil motives of self-interest, greed, covetousness, resentment, lust for power, and cowardice; and which therefore anger could not revenge, love could not endure, friendship could not forgive. (Arendt, 1951; 2017, p. 489)

Hannah Arendt's reflections, as reconstructed here, highlight the central role of terror and ideology in totalitarianism. Terror and ideology build on each other and are interdependent. While terror destroys the external space in which people can meet and act together, radical ideology, with its universal, consistent logic, destroys people's inner freedom and prevents them from connecting with reality (Meints-Stender, 2011, p. 111). In addition to terror and ideology, the previous summary reveals other key features of Arendt's conceptualization of total domination (compare Table 1); namely:

• The elimination of spontaneous, interpersonal action (Arendt, 1951; 2017, pp. 435, 496, 506).
• A monopoly of power that does not allow for any other worldview or opposition (Arendt, 1951; 2017, pp. 295, 491).
• The view of the human being as a means to a purpose (Arendt, 1951; 2017, p. 468).
• The meaninglessness and superfluousness of human action (Arendt, 1951; 2017, pp. 485, 496).

These characteristics are both preconditions and results of terror and ideology, and they form the foundation of the following analysis. Terror and ideology themselves, in contrast, are not analyzed here because they can be verified only when a system completely controls all levels of society. Thus, the aim of the study is not to equate AI applications with totalitarianism but rather to highlight totalitarian tendencies and draw attention to possible dangers in the use of AI in higher education. In the following sections, I compare AI applications in the context of higher education with the elaborated criteria to identify possible political dangers of AI-supported learning and teaching.

AI DEFINITION AND AI APPLICATIONS IN HIGHER EDUCATION

Krafft et al. (2020, p. 77), referring to the OECD, provided a general definition of AI:

An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.
This definition is also the basis for the use of AI in education (AIEd). Baker et al. (2019, p. 10) defined AIEd as "computers which perform cognitive tasks, usually associated with human minds, particularly learning and problem-solving." Although AI is now considered a key topic in higher education and is increasingly established at universities, especially through various research projects (e.g., Zawacki-Richter et al., 2019) and new AI tools such as ChatGPT, Bates et al. (2020) argued that AI applications in teaching-learning contexts receive less attention than in other areas of society. Bates et al. (2020) saw two reasons for this reluctance: first, the general attitude of education toward new technologies and changes, and second, the behaviorist orientation of AI applications in teaching-learning contexts. The latter focuses primarily on the presentation and verification of knowledge and does not consider the complexity of higher education (Bates et al., 2020). In this context, Bates et al. (2020) referred to the systematic review by Zawacki-Richter et al. (2019), which showed that research on AIEd is mainly conducted by computer scientists rather than educational scientists. This leads to a very model- and production-oriented view of education, which simplifies teaching and learning and does not do justice to the complexity of higher education.
In addition to these arguments, the European Commission (2022) considers the use of AIEd to be high risk and insists on strict requirements for risk assessment and mitigation, which (rightly) makes the use of AIEd more difficult. Regarding AI-assisted teaching and learning, Bates et al. (2020, pp. 10f.) also called for the following aspects to be considered or implemented:

[D]eveloping the capacity to avoid bias and to ensure diversity, protect privacy, develop transparent data policies, integrate regular ethical data impact assessments of the systems adopted and treating personal data as a fundamental right. …

This brief overview highlights current concerns and critiques of the use of AIEd and shows that there is a need for more theoretical engagement with these issues. Despite these difficulties and challenges, however, AI applications are having a growing impact on teaching and learning in universities, largely owing to the development of AI-based tools such as ChatGPT (Gimpel et al., 2023; Lo, 2023) that have an external impact on universities.
A distinction can be made between two types of AI application. The first type comprises AI applications that are officially implemented in teaching, that is, approved by the university. Current examples include intelligent tutoring systems, adaptive systems and personalization, profiling and prediction applications, and assessment and evaluation tools (Zawacki-Richter et al., 2019). Such AI applications aim to develop personalized learning pathways for students, increase student success, reduce learning times, and identify at-risk students (Crompton & Burke, 2023; de Witt et al., 2020). The second type comprises commercial AI tools, such as AI-based text generators, that affect universities from the outside.

ARE THERE ANY TOTALITARIAN ELEMENTS IN AI TECHNOLOGIES?
The illustration in the previous section has shown the different ways in which AI applications can be used in teaching-learning contexts. To explore how AI technologies in higher education share similarities with totalitarian features, I describe the use of AI with two examples: adaptive systems and AI-based text generators. This approach makes it possible to provide a specific answer to the question of whether and in what way the selected AI applications contain totalitarian elements. I selected adaptive systems and AI text generators in order to investigate both a university-based AI application and a commercial AI tool.

Adaptive Learning Systems and Totalitarian Characteristics
Essentially, Swertz (2018) defined an adaptive system as an algorithmic architecture characterized by complexity that uses learning analytics and/or educational data mining to collect, analyze, and derive results from educational data. These data are generated through interaction with the system (e.g., answering quizzes and interacting with integrated chatbots), but also through click behavior and written exchanges with other students or teachers (Leineweber & Wunder, 2021). In their systematic review, Zawacki-Richter et al. (2019) identified adaptive systems as a current AI trend in higher education and pointed to different directions of this application. Adaptive systems are used to create personalized learning materials and to represent knowledge. Another central aspect of adaptive systems is the monitoring and guidance of students based on the interpreted data. In this context, de Witt et al. (2020) showed that adaptive systems are particularly suitable for teaching and testing basic knowledge or the acquisition of language skills, which confirms their behaviorist orientation. Adaptive systems are also used to support teachers in the creation and design of their teaching formats.
This definition shows that adaptive systems in the educational context revolve around the optimization of individual learning outcomes (Castañeda & Selwyn, 2018; Leineweber & Wunder, 2021). The focus of the adaptive learning environment is thus a predefined goal (target value). The system orients itself toward this goal and calculates the student's current performance (actual value) against it. The adaptive system attempts to bring the current actual value as close as possible to the target value through goal-oriented personalization (Leineweber & Wunder, 2021). In the logic of the adaptive environment, learning and students are only a means to achieve the predefined goal. Spontaneous learning moments or analog learning experiences (such as an exchange with a fellow student, looking up a topic in a book, or a spontaneous idea for an essay) cannot be captured by the logic of the adaptive system. These moments have neither value nor meaning within the AI application.
As a result, teaching-learning contexts in adaptive systems lose their claim to spontaneity and to being ends in themselves. According to Castañeda and Selwyn (2018, p. 5), this can lead teachers and learners to pursue only a goal set by the AI, making them and their learning experience more and more a means to achieve the goal of the adaptive system:

Digital technology places students in personal formative cycles and individual feedback loops. Individuals-students alongside lecturers and academics-are expected to become industrious self-improvers, driven by external goals and striving to improve one's own performance.
This illustration shows that adaptive systems, like totalitarian thinking, see students and learning as a means to a purpose determined by AI, in which students have no freedom for spontaneous or serendipitous learning experiences.
Closely related to the goal of personalized teaching and learning through adaptive systems is the question of how the focus on individual learning progress affects interpersonal relationships. For example, the tendencies toward individualization and optimization in adaptive systems could lead to a situation in which students are more interested in their individual progress and in which interpersonal exchange and collaborative action take a back seat (Castañeda & Selwyn, 2018). Clayton and Halliday (2017, p. 299) already feared a reduction in social exchange owing to the increasing digitization of educational processes:

[U]niversity is largely about training for citizenship in ways that require interaction with peers from different social groups. … If digitisation threatens the sharing of space in the learning process, it may destroy one of the most valuable means through which societies pursue integration.
Because adaptive systems focus much more concretely on individual learning and are designed to reduce the workload of teachers (de Witt et al., 2020; Watanabe, 2022), they form a clear contrast to social learning cultures in which students interact with communities or teachers. Wunder (2021) saw the marginalization of human exchange by new educational technologies as producing the loneliness of the student, going so far as to compare him or her to the literary figure of Robinson Crusoe.
These overlaps indicate that adaptive learning environments, like totalitarianism, do not consider interpersonal action. Another aspect that underlines this claim is the lack of spontaneity in adaptive systems. According to Arendt, collaborative action is characterized precisely by its unpredictability and uncontrollability (Arendt, 1998, p. 232), whereas predictability and control are two pillars of AI-based learning environments (Leineweber & Wunder, 2021). Joint, spontaneous learning therefore has no place in adaptive systems, and students act more with an artificial system than with their fellow human beings, leaving out the interpersonal, as in totalitarian structures.
Although adaptive systems enable personalized learning with the help of individual learning materials (Castañeda & Selwyn, 2018; Leineweber & Wunder, 2021), AI-supported algorithms work according to the opposite principle: the basic procedure of AI-supported computation is to find general rules in datasets (Gimpel et al., 2023; Kirste & Schürholz, 2019). In particular, when monitoring academic performance, adaptive systems divide students into specific categories based on data-driven clusters. This unambiguous classification leads to stereotyping (Büching et al., 2019), whereby students are captured by the adaptive learning environment only via certain generic characteristics and assigned prototypes. Adaptive systems thus enable personalized learning, but they cannot understand the value of individuality, and individuality is also not compatible with the inner logic of the system. In this way, like totalitarian systems, they are incapable of understanding the true and pluralistic nature of the human being.
Adaptive systems build a new digital learning reality based on real-time calculations of student data. The collection of various data, some of which are generated unconsciously, and their AI-based analysis promise a high degree of objectivity in the assessment of learners. Adaptive systems thus provide students and teachers with a comprehensive and explicit assessment that summarizes the learning process as an increasing or decreasing value (Leineweber & Wunder, 2022; Mau, 2017, p. 27). Thus, Nowak et al.'s (2018, p. 30) inferences about AI-supported applications can also be related to adaptive learning environments:

[S]tatistically better performance of neural networks over human experts raise the temptation to replace human judgment and decision-making with neural networks not only for simple tasks but also for complex decision-making and judgement tasks such as employment decisions and political and business strategic choices.
In this context, the general black-box phenomenon of self-learning algorithms (Adadi & Berrada, 2018) means that the calculated output values cannot be verified (Herzberg, 2023). (Note: There are many efforts toward so-called explainable AI, in which data-driven programs provide explanations for their decisions [Rai, 2019; Adadi & Berrada, 2018]. However, it is questionable to what extent this requirement is even feasible in the field of AI [Herzberg, 2023]. Dyson [2019] went one step further and stated that systems capable of intelligent behavior are too complex to be fully understood by humans.) Through this autonomy, adaptive learning environments make an irrefutable claim to truth (Bächle, 2016, p. 25). They thus demand a monopoly of power that does not allow for any other consideration or verification within the system. Wunder (2018) even went so far as to portray progressive digitization, with its intelligent educational technologies, as having no alternative and to equate it with a law of nature. For adaptive learning systems, which are only one part of a complex learning architecture, this seems somewhat exaggerated. Nevertheless, the brief analysis shows that adaptive learning systems tend to promote power monopolies and create learning realities based on calculations.
When the adaptive system is compared with selected features of totalitarianism, striking parallels emerge that underline the totalitarian character of these AI applications: Adaptive systems see students as a means to a purpose, reduce human interaction, group students according to certain characteristics, and represent a self-contained system that can neither be controlled nor allow for a different point of view. However, the features of meaninglessness and of mixing reality and fiction are not present in adaptive systems because they have clear benefits for higher education teaching (including the promotion of personalized learning) and base their statements on statistical data. Although it could be argued that adaptive systems have a very limited view of reality, ignoring the spontaneous and social aspects of learning, this is not sufficient to speak of a loss of reality through adaptive systems.
This analysis provides an impetus to think more fundamentally about whether universities should promote the development of adaptive systems and whether individualized and automated learning is desirable for university learning. The theoretical comparison thus also reveals the political significance of the use of AI in different areas of society, which should not be forgotten alongside technological enthusiasm or legal and concrete ethical problems. Universities, as public spaces, must also think about the political consequences of new technologies such as AI and take responsibility for their students in this sense. This is especially the case for adaptive systems because, unlike AI tools such as text generators, adaptive systems will be institutionally embedded in higher education teaching and universities, which are therefore particularly well placed to deal with the possible dangers.

Text Generators and Totalitarian Characteristics
In contrast to the analysis of adaptive systems, totalitarian features such as the destruction of spontaneous human action, the creation of a monopoly of power, and the classification of human beings as prototypes or means to an end are less central to text generators and are not addressed here. The reason for this decision is that text generators are very diverse and are used in different industries (e.g., marketing, sales, journalism) (Limburg et al., 2022) and therefore cannot be reduced to a specific context of application. Rather, the analysis shows that the use of text generators can lead to a loss of reality and experience in students' writing, so that academic writing loses its meaning, especially in examination situations. Furthermore, the superfluousness of human writing and of the human evaluation of texts is discussed.
AI-powered text generators based on OpenAI's GPT model currently concern many teachers and education researchers owing to the rapidly increasing quality of their text production. (Note: GPT is an AI-powered language model based on a deep learning architecture and trained on input data [e.g., digitally provided documents]. Various platforms [OpenAI, Headlime.com, Copy.ai, Aleph Alpha] integrate the GPT model, enabling various tools to support the writing process, including automatic text generation [Gimpel et al., 2023; Limburg et al., 2022].) McKnight (2021, p. 442) summarized the problem:

With artificial intelligence (AI) now producing human-quality text in seconds via natural language generation, urgent questions arise about the nature and purpose of the teaching of writing in English. Humans have already been co-composing with digital tools for decades, in the form of spelling and grammar checkers built into word processing software. Yet AI has now advanced such that humans need to have less input in the writing process.
As the independent writing of individual results is central to both study and research, AI-based text generators pose a challenge to teachers and educationalists. In particular, the concept of plagiarism is discussed in this context. The Office of Research Integrity (ORI) described plagiarism as misconduct in the research process and defined it as follows: "Plagiarism is the appropriation of another person's ideas, processes, results, or words without giving appropriate credit" (ORI). Limburg et al. (2022) showed that, according to this definition, plagiarism does not exist when texts are created with an AI-based text generator, mainly because the text is produced uniquely and is not a copy of another text. In this context, Dehouche (2021, p. 19) called for a new definition of plagiarism and showed that authorship must be renegotiated for automatically generated texts.
To make matters worse, both teachers and plagiarism software have difficulty recognizing automatically created texts (Luitse & Denkena, 2021). Text generators offer great potential for abuse, especially in examination contexts in which students are supposed to produce their own texts, demonstrating their ability to independently summarize and interpret research results (Limburg et al., 2022). This potential for abuse is increased by the fact that students perceive writing as a major difficulty in their studies (Badenhorst et al., 2015; Ginting & Barella, 2022).
As a result, there are currently many considerations in educational discourse about how to deal with AI-based text generators in higher education. Kumar et al. (2022) called for an obligation to label automatically generated texts, while Francke and Bennett (2019) advocated raising awareness among teachers. Wessels (2023) suggested the development of new examination methods (e.g., more oral examinations). Meanwhile, Reinmann (2023) wondered whether academic values might need to change, with academic writing once again becoming more of a means to an end. Otsuki (2020) took a positive stance on this issue, arguing that text generators should be seen as an opportunity to rethink academic writing.
According to Frye (2020), students use AI text generators to produce a text with little or no effort or expertise of their own, or to produce an error-free, grammatically correct text. To produce text, generators such as ChatGPT work with large language models. Large language models calculate the probability of which word follows a particular text sequence and compose their texts based on these calculations (Gimpel et al., 2023; Li, 2023). AI text generators therefore do not generate text on a thematic basis or through actual understanding or thinking, as humans do, and Li (2023, p. 1) described them as "stochastic parrots." It is also possible for AI text generators to hallucinate statements that are consistent with the system's internal logic but not grounded in any true context or source. This creates the risk that AI-generated text fragments will spread disinformation and present a false picture of certain facts (Li, 2023; Lo, 2023). Especially for students in their first semesters, or in interdisciplinary fields where they do not yet have deep knowledge of a discipline or of the standards of academic work, the use of AI text generators can lead them to be influenced by false information or to misjudge the complexity of a topic. In addition, text generators are particularly good at solving simple writing tasks, which are mainly assigned in the first semesters. Students who use text generators at the beginning of their studies do not develop basic skills in academic writing and are likely to fail at more complex tasks later. They also lack concrete knowledge of academic standards for writing or citing sources, which could make it difficult for them to spot misstatements or contradictions in automatically generated texts (Malinka, 2023). In general, however, it is hard to judge the veracity of an AI-generated text because of invented statements and missing or fabricated sources.

It is precisely this mixing of fact and fiction that shows how students can abandon the credo of academic work, which is characterized by comprehensibility and transparency, and how a new reality can emerge through texts from an AI. The power of this mixing is demonstrated, for example, by AI text generators that create "pertinent, but non-existent academic reading lists" (Li, 2023, p. 1), showing that text generators cannot understand the actual meaning of sources and scientific work. This illustrates how AI systems create a new reality, similar to totalitarian systems, and thus pose a great threat to academic writing. In this context, a parallel can also be drawn with the characteristic of the power monopoly, as text generators produce certain answers that they present as the truth, thus excluding other perspectives. The extent to which text generators operate in a closed system is illustrated, for example, by the fact that ChatGPT 3 can only access data created before 2022 (Baidoo-Anu & Owusu Ansah, 2023; Lo, 2023).
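The next-word-probability mechanism described above can be illustrated with a deliberately tiny sketch: a bigram model over an invented toy corpus. All names and the corpus here are hypothetical, and real large language models are vastly more sophisticated; the sketch shares only the underlying principle of emitting statistically probable continuations without any understanding of meaning.

```python
from collections import Counter, defaultdict

# Toy corpus, invented purely for illustration.
corpus = "the student writes the essay and the student revises the essay".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(prev):
    """Relative frequency of each candidate next word after `prev`."""
    counts = following[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def generate(start, length):
    """Greedily emit the most probable next word at each step."""
    words = [start]
    for _ in range(length):
        probs = next_word_probs(words[-1])
        if not probs:  # no observed continuation for this word
            break
        words.append(max(probs, key=probs.get))
    return " ".join(words)

print(next_word_probs("the"))   # e.g. {'student': 0.5, 'essay': 0.5}
print(generate("the", 3))
```

The generated string is fluent within the corpus's statistics, yet the model has no notion of what an "essay" is; this is the "stochastic parrot" behavior in miniature, and it also hints at why such systems can confidently emit plausible-sounding but ungrounded statements.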
Apart from the risk of students not being able to distinguish between reality and fiction in texts, the use of text generators deprives students of their own writing experience, rendering academic writing meaningless. It becomes meaningless because students do not acquire basic skills in the writing process, such as organizing and formulating thoughts in a scientific tone and style, or realizing that an academic essay requires multiple revisions and is never perfect. They are merely editors of a text produced on the basis of mathematical calculations and, at best, experience the writing process only by rewriting and correcting a text. As writing and thinking are interdependent, critical inquiry also suffers when the writing process is absent (Sumarno et al., 2022). Sociologist Niklas Luhmann's famous statement that one cannot think in a sophisticated and coherent way without writing (Luhmann, 1981, p. 222) seems questionable since the emergence of ChatGPT, because text generators show students, teachers, and researchers that writing and thinking are no longer necessarily linked. The use of AI text generators is a misappropriation of academic writing because it neither develops students' writing and thinking skills, nor does the resulting text possess intentionality or a truthful opinion, relying only on statistical calculations. Thus, both totalitarian rule and the artificial production of academic texts are characterized by their lack of meaning. Another question to consider is whether text generators are replacing interpersonal action between students and their peers and teachers. For example, students may prefer to have topics explained to them in a conversation with ChatGPT (Lo, 2023) rather than talking with peers or their teachers, or to use a text generator rather than writing their own text and discussing it with others.
An additional characteristic of totalitarianism is that people become obsolete and are no longer needed under its rule. On a smaller scale, the same phenomenon applies to text generators, which render the human ability to write texts superfluous. How dispensable humans can become in the writing process is illustrated by a function of OpenAI's ChatGPT chatbot (https://chat.openai.com/). With GPT-3 or GPT-4, in addition to producing coherent texts, the AI-supported tool can also evaluate texts according to defined criteria (Wessels, 2023). Students can therefore already write an academic essay using a text generator, and teachers can evaluate it using an AI application, thus creating a teaching-learning context in which human intervention becomes superfluous. This example shows that academic writing, and the educational principles behind it, can be rendered absurd by a single AI application.
The comparison demonstrates how an AI application can fundamentally call academic writing into question, and thus illustrates the great influence that AI is already having on university teaching. This is all the more alarming because AI text generators are only a function, not a self-contained and complex system. In addition to the practical problems of using text generators, university teachers and educationalists now need to ask themselves general questions about the future and the purpose of writing. Furthermore, they need to discuss the deeper meaning of the human writing process with their students. Otherwise, human academic writing will become increasingly irrelevant and eventually obsolete; consequently, students will fail to learn one of the most important skills of academic education.
The analysis shows that different features of totalitarianism can be identified in adaptive systems and AI-based text generators. To illustrate this, the main findings are summarized in Table 1.

✓ The use of text generators reduces students' experience of writing their own texts and weakens their writing and thinking skills, making academic writing meaningless.

✓ Text generators can do the writing for students and the correcting for teachers (at least for the most part), thus creating teaching-learning environments in which humans become superfluous.

CONCLUSION
This article explored the extent to which certain AI applications in higher education have totalitarian characteristics and identified key similarities between adaptive systems, text generators, and totalitarian structures. It suggests that while adaptive systems are characterized by the minimization of human agency, the view of students as prototypes and as means to an end, and monopolistic power structures, AI text generators mix reality and fiction, rendering the academic writing process meaningless and superfluous.
The link between totalitarianism and the use of AI in higher education, as presented in this paper, makes it clear that the consequences of new technologies must not only be discussed in legal or ethical terms; the political dimension must also be taken into account. Decision-makers in higher education (but also in other social and public institutions) need to develop an early awareness of the political and social dangers of new technologies and, on this basis, decide whether and to what extent AI-based technologies should be introduced into the public sphere. This does not mean ruling out AI in general, but recognizing the dangers it poses and taking early action. One solution, for example, might be to offer alternative learning formats that focus on spontaneous and interpersonal exchange when an adaptive system is introduced, thus counteracting the minimization of human action. Or, in relation to AI text generators, the problem of fiction and reality could be addressed in courses on academic work, giving students an enlightened approach to generated text from the beginning of their studies. Because AI applications differ widely and always depend on the context in which they are used, there is no general set of rules for dealing with totalitarian structures in AI applications at this stage. Rather, decision-makers and AI practitioners need to reflect on the political consequences of the specific AI application, to identify and openly communicate problems, and then to consider whether the application should be implemented and what measures can be taken to counteract totalitarian tendencies. The following questions can provide AI researchers and practitioners with a first orientation for assessing concrete AI applications beyond the educational sector and help them identify social and political consequences:

• Does the AI application prevent or minimize interpersonal actions?
• Can the decisions and statements of the AI application be checked, and if not, to what extent is there a danger of a monopoly of power?
• Are humans captured as prototypes in the AI application?
• Does the AI application lead to people becoming a means to an end?
• Does the AI application represent reality in a one-sided way, or does it mix reality and fiction?
• Can the use of an AI application lead to a loss of meaning in an area under its influence?
• Does the AI application make human action superfluous?

These questions have been asked before, but their relation to totalitarian structures gives them new meaning and a raison d'être: they now provide a theoretical basis for identifying totalitarian features of AI and for stimulating reflection on the political consequences of concrete AI applications. However, especially in the field of political philosophy, questions do not necessarily lead to clear answers, but often to further questions. The ability to ask questions critically and openly is particularly important in relation to AI applications, so that educators become aware of possible dangers and maintain a critical view alongside their enthusiasm for new technological possibilities.

COMPETING INTERESTS
I declare that there are no competing interests.

FUNDING AGENCY
I acknowledge support for the article processing charge by the Open Access Publication Fund of Hamburg University of Applied Sciences.