Ethical Considerations for Artificial Intelligence in Educational Assessments

DOI: 10.4018/979-8-3693-0205-7.ch003

Abstract

In the vital context of education, the application of artificial intelligence (AI) to assessments necessitates a nuanced examination of the boundaries between ethically permissible and impermissible practices. In this chapter, the authors applied a systematic literature mapping methodology to survey extant research and to structure the landscape into explicit topical research clusters. Through topic modelling and network analyses, the research mapped key ethical principles to different assessment phases in a triadic ontological framework. The chapter aims to provide researchers and practitioners with insights into the ethical challenges that exist across an end-to-end assessment pipeline.
Chapter Preview

Introduction

Artificial intelligence in education (AIED) is the machine mimicry of human-like consciousness and behavior to achieve educational goals, through the use of technology that allows digital systems to perform tasks commonly associated with intelligent beings.

Of the three pillars of education, assessment exists as an important component alongside pedagogy and curriculum (Hill and Barber, 2014). Within the AIED domain, Chaudhry and Kazim (2022) surveyed the landscape and concluded that assessment is one of the four key sub-domains in AIED, alongside learning personalization, automated learning systems, and intelligent learning environments. In an educational context, assessment refers to ‘any appraisal (or judgment or evaluation)… of work or performance’ (Sadler, 1989). The infusion of artificial intelligence (AI) into assessments has grown significantly in recent years. Research on assessments related to digital education in the higher education landscape showed that studies of AI and adaptive learning technologies tripled between 2011 and 2021, and are likely to surpass immersive learning technologies as a prime research area in the near future (Lim, Gottipati and Cheong, 2022, p. 5). Among stakeholders, there is a consensus positive view that “AI would provide a fairer, richer assessment system that would evaluate students across a longer period of time and from an evidence-based, value-added perspective” (Luckin, 2017).

The infusion of AI in assessments also brings its own set of concerns. AI implementation comes with technical and operational issues relating to system deployment. Arguably, these challenges involve fewer grey areas than the complication of navigating the parameters and boundaries of ethics. Evaluators, as practitioners of assessments, will need to acknowledge, respect, and uphold the ethical principles at stake throughout the implementation of an AI-based assessment.

The objective of this chapter is to examine the landscape of AI-related ethical issues for educational assessments through the lens of a systematic literature mapping approach. A systematic literature mapping study is concerned with the mapping and structuring of a topical research area, the identification of gaps in knowledge, and the examination of possible research topics (Petersen, Vakkalanka and Kuzniarz, 2015).
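As a minimal illustration of the network-analysis step used in such mapping studies, the sketch below builds a keyword co-occurrence network from a toy corpus. All keywords and abstracts here are hypothetical placeholders, not data from the chapter's actual corpus:

```python
from collections import Counter
from itertools import combinations

# Hypothetical mini-corpus: sets of keywords extracted from three
# abstracts (all terms are illustrative placeholders).
abstracts = [
    {"fairness", "privacy", "automated grading"},
    {"privacy", "trust", "proctoring"},
    {"fairness", "accountability", "automated grading"},
]

# Count how often each pair of terms co-occurs in the same abstract;
# frequent pairs become weighted edges in a co-occurrence network.
edges = Counter()
for terms in abstracts:
    for pair in combinations(sorted(terms), 2):
        edges[pair] += 1

# The heaviest edges hint at topical clusters for the literature map.
print(edges.most_common(1))
```

In a real mapping study the edge weights would feed a community-detection or clustering step to surface the topical research clusters described above.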

This chapter investigates the following research questions:

  • RQ1: Where do the studies that discuss ethical issues relating to AI-based assessments arise from?

  • This question examines where the studies discussing ethical issues relating to AI-based assessments arise from, analyses patterns surfaced through exploratory data analysis, and seeks to provide recommendations where applicable.

  • RQ2: What are the main AI use cases relating to assessments?

  • This question looks at AI applications in different areas of assessments, and how prominently each AI application area features in related studies.

  • RQ3: What are the main ethical issues arising from the AI implementations relating to assessments?

  • This question looks at the key ethical principles related to AI applications in assessments, and how prominently each ethical principle features in related studies.

  • RQ4: What are the key themes of the systematic literature map?

  • This question seeks to identify the key themes of the systematic literature map, and to draw up a framework that visualizes and generalizes those themes for researchers and practitioners.

Through a systematic meta-analysis of existing literature, this chapter helps: (i) understand and consolidate knowledge regarding what was previously explored relating to AI-based assessment methods and their interconnected ethical issues, (ii) provide an integrated inquiry into the association of the ethical problems faced, and (iii) identify potential future research topics in the field.

Key Terms in this Chapter

Applied Ethics: The study of the practical application of philosophical tools to examine and provide solutions to real world morality issues.

Normative Ethics: The study of the moral rules and standards that guide how individuals, institutions and societies should behave in a moral sense.

Ethics Bluewashing: The implementation of superficial or misleading measures to appear ethical.

Ethics Shirking: The gradual engagement in less ethical work over a period of time to lower the perceived resistance against such work.

Ethics Lobbying: The use of ethics to avoid or delay good and necessary regulation and enforcement.

Privacy: This ethics principle relates to the protection of data subjects against injurious effects from the use of personal information in AI systems, without unduly compromising privacy-related regulatory compliance or restricting AI development.

Fairness: This ethics principle relates to fair, equitable and appropriate educational practices that should be perpetuated by AI systems.

Inclusivity: This ethics principle relates to inclusiveness and accessibility considerations applied to AI systems to meet different student needs in a personalized environment at scale.

Metaethics: The study of the nature (i.e., moral ontology), meaning (i.e., moral semantics), and the scope and knowledge to defend or support (i.e., moral epistemology) moral judgments.

Cheating: This ethics principle relates to dishonest and deceptive learner behavior to violate educational rules and regulations.

Consequentialism: A type of normative ethics that emphasizes that the outcome of an action defines the morality of an action.

Ethics Dumping: The export or import of unethical activities to a place with less strict regulations.

Auditability: This ethics principle relates to the permitting of independent third-party reviewers to audit, analyze and report findings relating to the usage and design of data and AI algorithms in education.

Deontological Ethics: A type of normative ethics that emphasizes an individual’s rights and duties, including the presence of natural, absolute rights (i.e., natural rights theory), the presence of human rationality and inviolable moral laws (i.e., the Kantian categorical imperative), and the morality of actors making unbiased judgments behind a veil of ignorance (i.e., contractualism).

Artificial Intelligence in Education (AIEd): The machine mimicry of human-like consciousness and behavior to achieve educational goals, through the use of technology that allows digital systems to perform tasks commonly associated with intelligent beings.

Explainability: This ethics principle relates to the lowering of opacity relating to data, AI algorithms and AI-driven decisions, the justification of its use, and the communication of details in a non-technical easy-to-understand manner to relevant stakeholders.

Trust: This ethics principle relates to the placing of confidence in AI systems and the provision of data to achieve educational objectives.

Accuracy: In the context of educational assessments, this ethics principle relates to the reliability and validity of assessments when an AI system is applied.

Ethics Shopping: The picking and choosing of ethics principles that are justified a posteriori and retrofitted to pre-existing behaviors.

Virtue Ethics: A type of normative ethics that emphasizes the inherent disposition of an individual, and not specific actions.

Human Centricity: This ethics principle relates to the aim towards upholding human agency, dignity and autonomy, minimization of harm (and when necessary, weighed against a greater good), and equitable distribution of benefits.

Accountability: This ethics principle relates to the responsible discharge of AI ethics when designing and delivering AI systems, depending on the roles and contexts, in a consistent manner.
