Responsibility and War Machines: Toward a Forward-Looking and Functional Account

Jai Galliott
Copyright © 2015 | Pages: 14
DOI: 10.4018/978-1-4666-8592-5.ch008

Abstract

The purpose of this chapter is to demonstrate that while unmanned systems certainly exacerbate some problems and cause us to rethink who we ought to hold morally responsible for war crimes, traditional notions of responsibility are capable of dealing with the supposed ‘responsibility gap’ in unmanned warfare, and that moderate regulation will likely prove more important than an outright ban. The chapter begins by exploring the conditions under which responsibility is typically attributed to humans and how these responsibility requirements are challenged in technological warfare. Following this is an examination of Robert Sparrow’s notion of a ‘responsibility gap’ as it pertains to the deployment of fully autonomous weapons systems. It is argued that a solution can be reached by shifting to a forward-looking and functional sense of responsibility that incorporates institutional agents and ensures that the human role in engineering and unleashing these systems is never overlooked.
Chapter Preview

Background: Challenges to Responsibility in Hi-Tech Warfare

Moral responsibility in war concerns actions, omissions and their consequences. In the cases presented in military ethics readers, those deemed worthy of blame are typically agents who fail to adhere to just war principles, or to otherwise do the ‘right thing’ as determined by platoon leaders, government or country. Responsibility also turns on the conditions under which agents did the right or wrong thing. On Fischer and Ravizza’s (1998) landmark account, to be held responsible the mechanism that issues the relevant behaviour must be the agent’s own and be responsive to reasons: actors must not be ‘deceived or ignorant’ about what they are doing and ought to have control over their behaviour in a ‘suitable sense’ (Fischer & Ravizza, 1998). Put more specifically, an agent should be considered morally responsible only if they intentionally make a free and informed causal contribution to the act in question: they must be aware of the relevant facts and consequences of their actions, must have arrived at the decision to act independently of coercion, and must have been able to take alternative actions based on their knowledge of the facts. If these conditions are met, we can usually establish a link between the responsible subject and the person or object affected, either retrospectively or prospectively (the latter will be the focus of the final section). However, technologically enabled warfare of the unmanned type presents various challenges for this standard account of moral responsibility. For the sake of a complete exposition and refutation of Sparrow’s claim that the responsibility gap presents an insurmountable threat, it is necessary to take a closer look at how semi-autonomous military technologies, broadly defined, can complicate responsibility attribution in warfare.
