AI in Education: Ethical Challenges and Opportunities

Copyright: © 2024 | Pages: 16
DOI: 10.4018/979-8-3693-2964-1.ch003

Abstract

As artificial intelligence (AI) continues to advance, its integration into the field of education presents both promising opportunities and ethical challenges. This chapter explores the multifaceted landscape of AI in education, examining the ethical considerations associated with its implementation. The opportunities encompass personalized learning experiences, adaptive assessment tools, and efficient administrative processes. However, ethical concerns arise regarding data privacy, algorithmic bias, accountability, and the potential exacerbation of educational inequalities. Artificial intelligence is a field of study that combines machine learning, algorithm development, and natural language processing, and its applications are transforming the tools of education. AI has a variety of educational applications, such as personalized learning platforms to promote students' learning, automated assessment systems to aid teachers, and facial recognition systems to generate insights about learners' behaviors.
Chapter Preview

Introduction

Artificial intelligence (AI) began to be applied to education about 50 years ago, only a decade after AI itself was established as a research field in 1956 at a workshop at Dartmouth College in Hanover, New Hampshire, USA (Moor, 2006). In 1970, Carbonell's article “AI in CAI: An Artificial-Intelligence Approach to Computer-Assisted Instruction” described a semantic network-based tutoring and authoring system for geography called SCHOLAR (Carbonell, 1970). This “Information Structure Oriented (ISO)” tutor separated its teaching strategy from its knowledge of South American geography, so that, in principle, the same teaching strategy could be applied to the geography of another part of the world, or a different teaching strategy could be applied to the geography of South America. Furthermore, because its geographic knowledge was explicitly represented as semantic networks, the system could reason about that knowledge to make inferences that were not explicitly encoded, and to answer questions about what it knew. Its “mixed-initiative” teaching strategy could thus involve both the system quizzing the student, drawing on the context and meaning of its questions, and the student asking questions of the system, both in very limited English. The system tracked which parts of the geography the student appeared to understand by marking the relevant parts of the semantic network, creating an evolving model of the student's knowledge. This adaptation to the student was one of the factors that distinguished the system from the computer-assisted instruction (CAI) systems that preceded it. The system also demonstrated what became the standard conceptual architecture for Artificial Intelligence in Education (AIEd) systems.
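
To make the idea of explicitly represented, inspectable knowledge concrete, the short Python sketch below shows a toy semantic network. It is an illustrative assumption rather than Carbonell's design: the node names, the "is-a" and "part-of" relations, and the simple inheritance-based inference are all hypothetical, but they show how such a system can answer a question that was never stored directly.

# Illustrative sketch only: a toy semantic network in the spirit of SCHOLAR's
# explicitly represented knowledge. The node names, relations, and the simple
# inheritance rule are hypothetical examples, not Carbonell's actual design.

class SemanticNet:
    def __init__(self):
        self.links = {}                      # node -> {relation: set of nodes}

    def assert_fact(self, subj, relation, obj):
        self.links.setdefault(subj, {}).setdefault(relation, set()).add(obj)
        self.links.setdefault(obj, {})       # ensure the object node exists

    def superclasses(self, name):
        """Follow 'is-a' links upwards, collecting every ancestor node."""
        seen, frontier = set(), [name]
        while frontier:
            current = frontier.pop()
            for parent in self.links.get(current, {}).get("is-a", set()):
                if parent not in seen:
                    seen.add(parent)
                    frontier.append(parent)
        return seen

    def ask(self, subj, relation, obj):
        """Answer a question, inferring facts inherited through 'is-a' links
        even when they were never stated about subj directly."""
        if obj in self.links.get(subj, {}).get(relation, set()):
            return True
        return any(obj in self.links.get(parent, {}).get(relation, set())
                   for parent in self.superclasses(subj))

net = SemanticNet()
net.assert_fact("Argentina", "is-a", "South American country")
net.assert_fact("South American country", "part-of", "South America")
net.assert_fact("Argentina", "capital", "Buenos Aires")

print(net.ask("Argentina", "capital", "Buenos Aires"))   # stored directly: True
print(net.ask("Argentina", "part-of", "South America"))  # inferred via is-a: True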

The Early Days of AI in Education

An early collection of AIEd papers showed what could be achieved only a decade later (Sleeman and Brown, 1979). Among other things, this collection included articles on a computer-assisted instructional system set within a game (Burton and Brown, 1979), a tutor that added instructional rules to an expert system so that it could explain and teach its expert rules (Clancey, 1979), a knowledge representation designed to capture the student's evolving understanding (Goldstein, 1979), an entry-level programming tutor (Miller, 1979), and a quadratic equation teaching system that ran tests to assess its own teaching effectiveness and then updated its teaching tactics as a result (O'Shea, 1979).

Those early publications essentially mapped out what became the standard conceptual architecture for such learning tools, namely an explicit model of what is being taught, an explicit model of how it should be taught, an evolving model of the learner's understanding and skills, and the user interface through which the learner and the system communicate. Hartley (1973) gave an early definition of this architecture as follows (sketched in code after the list below), where (3) and (4) together form the explicit instruction model, and the user interface was not mentioned because of its limited scope:

  • 1. A representation of the task

  • 2. A representation of the student and his performance

  • 3. A vocabulary of (teaching) operations

  • 4. A pay-off matrix or set of means-ends guidance rules (Hartley, 1973, p. 424)
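
As referenced above, the following minimal Python sketch restates Hartley's four components as plain data structures. The class names, fields, and the toy rule for choosing a teaching operation are illustrative assumptions rather than Hartley's notation; in particular, component (4) is reduced here to a single crude decision rule.

# Hypothetical sketch of Hartley's (1973) four components as plain data
# structures; the names, fields, and decision rule are illustrative
# assumptions, not Hartley's own notation.

from dataclasses import dataclass, field

@dataclass
class TaskModel:                   # (1) a representation of the task
    facts: dict                    # e.g. {"capital of Peru": "Lima"}

@dataclass
class StudentModel:                # (2) the student and their performance
    mastered: set = field(default_factory=set)
    errors: dict = field(default_factory=dict)   # item -> number of errors

TEACHING_OPERATIONS = ("present", "quiz", "hint", "review")   # (3) vocabulary

def choose_operation(student: StudentModel, item: str) -> str:
    """(4) A crude stand-in for the pay-off matrix / means-ends rules:
    pick the teaching operation expected to help most for this item."""
    if item in student.mastered:
        return "review"
    if student.errors.get(item, 0) >= 2:
        return "hint"
    if item in student.errors:
        return "present"
    return "quiz"

# Tiny usage example
task = TaskModel(facts={"capital of Peru": "Lima"})
student = StudentModel(errors={"capital of Peru": 2})
print(choose_operation(student, "capital of Peru"))   # -> "hint"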

The standalone nature of these early systems, their unsophisticated interfaces, and their lack of interest in collecting large amounts of learner data meant that many of the contemporary ethical issues around the use of AIEd were not in evidence.

From the start, the general field of AI has had intertwined scientific and engineering aspects (Buchanan, 1988). The scientific aspect of AI in education has concerned itself with questions around the nature of human learning and teaching, often with the goal of understanding and then duplicating human expert teaching performance. This aspect has focused largely on learner-facing tools but more recently has expanded into teacher-facing tools. The science has been pursued as a kind of computational psychology for its own sake or as a way to improve educational practice and opportunity in the world. The engineering aspect of applying AIEd has exploited a wide range of computational technologies such as Carbonell’s semantic networks, mentioned above, and more recently machine learning techniques of various kinds. This aspect of the work has pursued even wider goals that also include the development of educational administrator-facing tools.
