Thou Shall Not Kill: The Ethics of AI in Contemporary Warfare

Copyright: © 2024 |Pages: 15
DOI: 10.4018/978-1-6684-9467-7.ch015

Abstract

This chapter presents the evolution of AI and robotic technologies, with emphasis on those developed for military use, and the main strategic agendas of superpowers such as the USA, China, and Russia, as well as of peripheral powers. The authors also discuss the uses of such technologies on the battlefield. The chapter then examines the ethical dimensions of current military AI technologies. It starts with Mark Coeckelbergh's paper, to emphasize his call for a new approach to technoethics. The authors then turn to the ethical theory of Neil C. Rowe and his propositions for the ethical improvement of algorithms. Finally, the authors present the notion of electronic personhood proposed by Avila Negri, also touching upon the fact that the legal debate tends to fall into an anthropomorphic fallacy. To conclude, Thou Shall Not Kill, the highest "Levinasian Imperative", closes the gap of the anthropomorphic fallacy, so that our relationship with the killer machines may be viewed as asymmetric, non-anthropomorphic, and non-zoomorphic.
Chapter Preview

Research Methodology

This research is divided into two main parts. The first describes the evolution of AI technologies, with emphasis on military AI and robotic technologies. This approach is chronological and also explanatory in relation to the main strategic agendas of superpowers such as the USA, China, and Russia. Through an analysis conducted mostly at the level of politics and political science, we gain deep insight into their tensions and their aspirations to lead the military AI race, to dominate the world order, and to create an unassailable status quo. It is also quite important to highlight how this representation is mediated by mass media, literature, and spectacle, creating sociotechnical imaginaries that are far removed from these states' real capacities and from the realpolitik they can actually impose or support. The second part belongs to applied ethics and moral philosophy and has two subdivisions: the first covers the general ethical philosophy of AI, and the second a particular type of ethics belonging to Levinasian philosophy, promoting the ethics of radical asymmetry as applied to the technoethics of military AI.

Key Terms in this Chapter

Anthropomorphic Fallacy: Central to human creativity is the projection of human qualities upon the external world. So subtle is this mode of operation, and so all-pervasive is human nature, that it persistently generates a mode of thought which, although fallacious, exerts a tremendous hold over us. Several examples (schemas, technologies, stories, the evolutionary "ladder", contingency) reveal the impact of the resulting anthropomorphic fallacy (Source: Coffey, E. J., Journal of the British Interplanetary Society, Vol. 45, No. 1, pp. 23–29).

Unmanned Autonomous Vehicles: Fully autonomous vehicles which do not require a driver at all (Source: Christoph Bartneck, Christoph Lütge, Alan Wagner, Sean Welsh, An Introduction to Ethics in Robotics and AI, Cham: Springer, 2021, p. 40).

Applied Ethics/Technoethics: Technoethics is a term coined in 1974 by the Argentine-Canadian philosopher Mario Bunge to denote the special responsibilities of technologists and engineers to develop ethics as a branch of technology (Source: Technoethics | Encyclopedia.com, n.d.).

Autonomous Lethal Weapons: A weapon can be said to be "autonomous" in the "critical functions of targeting" if it can do one or more of the following without a human operator. If the weapon can decide what classes of object it will engage, it is autonomous in defining its targets; no current AWS has this capability. If a weapon can use sensors to select a target without a human operator, it has autonomy in the selection function of targeting; many existing weapons can select targets without a human operator (Source: Christoph Bartneck, Christoph Lütge, Alan Wagner, Sean Welsh, An Introduction to Ethics in Robotics and AI, Cham: Springer, 2021, p. 94).

Ethics of Asymmetry: The meeting is based on a radical asymmetry: I am always more responsible than the Other. The face of the Other, in its fragility and nudity, calls forth the responsibility for the Other, as "the impossibility for the other man, the impossibility of leaving him alone in the mystery of death". This relation is based on love; this love is not eros in general, nor a reduction to altruism or the goodness of a generous nature, but rather an an-archic bond between the subject and the good that comes from the outside (Source: Emmanuel Levinas, "Humanism and An-Archy," Revue Internationale de Philosophie, no. 85 (1968): 65-82).

Sociotechnical Imaginary: Collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology (Source: Sheila Jasanoff and Sang-Hyun Kim, Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power, Chicago: University of Chicago Press, 2015, p. 120).

Robot: Typically, an artificially intelligent agent is software that operates online or in a simulated world, often generating perceptions and/or acting within this artificial world. A robot, on the other hand, is situated in the real world, meaning that its existence and operation occur in the real world. Robots are also embodied, meaning that they have a physical body. The process of a robot making intelligent decisions is often described as "sense-plan-act": the robot must first sense the environment, plan what to do, and then act in the world (Source: Christoph Bartneck, Christoph Lütge, Alan Wagner, Sean Welsh, An Introduction to Ethics in Robotics and AI, Cham: Springer, 2021, p. 12).
