Humans, Autonomous Systems, and Killing in War

Jai Galliott (The University of New South Wales, Australia)
Copyright: © 2018 | Pages: 18
DOI: 10.4018/978-1-5225-5094-5.ch008


Technology has always allowed agents of war to separate themselves from the harm that they or their armed forces inflict: spears, bows and arrows, trebuchets, cannons, firearms, and other modern weaponry have all increased the distance between belligerents and supposedly made warfare less sickening than the close-quarters combat of the past. This chapter calls into question the claims of some proponents of a ban or moratorium on lethal autonomous weapons systems regarding a responsibility gap. It contends that most implications associated with the introduction of autonomous technologies can be resolved by recognizing that autonomy does not mean the elimination of human influence on the battlefield, and it advocates for a black-box-type recorder to ensure compliance with just war theory and the laws of war.
Chapter Preview


Despite strong opposition from various quarters, it remains an open question whether increasing levels of autonomy and distancing in weapon systems will have any significant effect on states’ or individuals’ ability to meet ethical and legal obligations. As neither the law of armed conflict nor just war theory refers specifically to levels of autonomy in weapon systems, or indeed to the impact of technologically facilitated spatial-moral distancing, the obligations which states currently bear in relation to systems with a degree of autonomy are those which apply to the use of any weapon system that is not, by its nature, illegal or immoral. There are, of course, weapon systems in use today that arguably conform to commonly accepted definitions of ‘autonomous’, such as the defensive and offensive close-in weapon systems commonly installed on naval warships for detecting and destroying missiles and aircraft; these and similar systems are capable of identifying targets of interest and firing without human input at the point of action execution. Not only are such weapon systems in use today, but there are no serious claims that their use in armed conflict is intrinsically illegal or immoral. This chapter focuses on the trail of humanity tied to the development of distancing weapons and argues that more advanced systems, capable of complex behaviour in less predictable environments, while perhaps morally problematic in that they facilitate moral distancing and disengagement, would not cross the threshold beyond which our existing normative instruments and frameworks cannot adequately account for their use. The fundamental tendency obscuring the capacity of the relevant instruments to deal with what are, in fact, little more than semi-autonomous weapons is the attribution of blame to the technology rather than to the people deciding how to use it.
Such arguments are often labelled ‘too reductive’, but those objections are, at best, counterproductive and, at worst, nonsensical. The actual task, it is argued and partly addressed here, is to identify the places in which compliance with the existing frameworks will be challenged as levels of autonomy on the battlefield increase. The main responsibility in this regard, it is suggested, is to focus on those areas of the lethal autonomous weapon systems product life cycle in which direct human interaction takes place, and to record that interaction.