Hyperbole or Hypothetical?: Ethics for AI in the Future of Applied Pedagogy

DOI: 10.4018/979-8-3693-0205-7.ch001

Abstract

Debates and sensationalized presentations of artificial intelligence (AI) across the media and in scientific and industrial contexts have shaped public perception of its potential benefits, but the profound potential for harm also ought to be acknowledged. This chapter provides a theoretical insight into how AI can be objectively debated amidst the hyperbole surrounding its implementation and the potential for the inaccessible to be made accessible over forthcoming months and years. A new level of paradigmatic sufficiency is required to underpin future practice with due regard for the ethical philosophy and sociology within which it will be based.

Introduction

‘By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones.’

(Daniel Kahneman, b. 1934)

This chapter will consider applied ethics in the context of pedagogical practice in relation to the potential impact of Artificial Intelligence (AI) in education and other societal infrastructures such as the military, health, and medicine. The extent to which AI has been progressively integrated into society over the last three years has exponentially increased media and scientific debate about what is and what might be, if all we are led to believe about AI is realised (Bareis & Jatzenbach, 2022). As with any landmark paradigmatic shift in society's use of and access to technological advance, the widespread introduction of AI has both positive and negative aspects, which can be harnessed for both human progression and decimation (Mikalef et al., 2022).

Initial debates surrounding AI focused predominantly on its functionalist capacity to reduce the complexity and challenge of largely physical tasks, where algorithmic decision making could be used as an adjunct support to mundane and burdensome human work (Mirbabaie et al., 2022). This has now progressed to widespread cognitive debates as to how AI can address tasks which previously seemed impossible and largely inaccessible, in terms of the decision-making processes necessary to undertake them (Madhav & Tyagi, 2022).

One of the key aspects of these debates is how humanity has often designed AI in human form, to the extent that technological artefacts are often perceived as robots with a high degree of sentient ambition (Owe & Baum, 2021). It is often thought that AI can compete for cognitive advantage, rather than simply being a design artefact used to extend the reach of humanity's applied intellect (Mele & Russo-Spena, 2023; Yamin et al., 2021). This has led to widespread hyperbole that AI somehow has the capacity to override human cognition and that its capacity for extended algorithmic thinking may eventually pose a huge threat to humankind, alongside offering some of the greatest technological developments of our age (Cools, Van Gorp & Opgenhaffen, 2022).

Unlike other technological advances, where the choice to engage with them was always an option, AI poses a wider societal issue in which that choice may no longer be possible, should the self-advancement of algorithmic decision making pose an overriding threat to humanity in terms of speed and capacity for action (Igna & Venturini, 2023). As such, the ethical principles of AI are factors that all organisations must now contemplate, so that the integration of ethical practice becomes a societal norm in the use of AI in practice. Beyond an anthropological perspective, social ethics and the philosophies underpinning them all impact upon the capacity of organisational decision making. They shape how AI may remain fully controllable, and how the algorithms within which it operates may be constructively aligned with those affective attributes of humanity that, as a society, we would wish to promulgate, rather than any degree of negativity (Henin & Le Métayer, 2021).

Key Terms in this Chapter

Existentialism: The philosophy of the nature of human existence as determined by capacity and capability for free will and free choice.

Epistemic Bias: The lens of subjective interpretation which influences systematic research practice when the ideals of human impartiality and value-freedom that may be shaping it go unacknowledged or undetailed.

Agency: The capacity for action or intervention producing a particular effect.

Validity: The state of being officially true or legally acceptable.

Hacking: The gaining of illegal or unauthorised access to data within the context of computing and technology.

Autonomous Weapons: A classification of military systems which have the independent ability to search for, identify, and engage strategic targets based upon pre-programmed constraints and restrictions.

AI Safety: The interdisciplinary field concerned with preventing the misuse, accidental or otherwise, of an AI system, and the harmful consequences which could result from one.

Algorithm: An algorithm is a process or set of rules to be followed in decision-making or other problem-solving operations, especially by computing technology.

Sentient/Sentience: Sentience is the capacity of a being to experience feelings and sensations.

Reliability: The extent to which a research instrument can repeatedly provide the same results in temporally separated instances of measurement.
