Lethal Military Robots: Who Is Responsible When Things Go Wrong?

Lambèr Royakkers, Peter Olsthoorn
Copyright © 2018 | Pages: 18
DOI: 10.4018/978-1-5225-5094-5.ch006


Although most unmanned systems that militaries use today are still unarmed and predominantly used for surveillance, it is especially the proliferation of armed military robots that raises serious ethical questions. One of the most pressing is the question of moral responsibility in cases where a military robot uses violence in a way that would normally qualify as a war crime. In this chapter, the authors critically assess the chain of responsibility with respect to the deployment of both semi-autonomous and (learning) autonomous lethal military robots. They start by looking at military commanders, because they are the ones with whom responsibility normally lies. The authors argue that this is typically still the case when lethal robots kill wrongly – even if these robots act autonomously. Nonetheless, they next look into the possible moral responsibility of the actors at the beginning and the end of the causal chain: those who design and manufacture armed military robots, and those who, far from the battlefield, remotely control them.
Chapter Preview


Although the use of unmanned systems is still in its infancy in most armed forces, some militaries, especially those of the US and Israel, have developed and deployed highly advanced drones. Even though the majority of the unmanned systems used in operations today are unarmed and mainly used for reconnaissance and mine clearing, the increase in the number of armed military robots, especially airborne ones, is undeniable. Certainly, on the face of it, unmanned systems have some strong benefits that could reduce the number of ‘unfortunate incidents’ on the battlefield. To start with, the main causes of misconduct on the battlefield – frustration, boredom, and anger – are diminished.1 What’s more, these unmanned systems have no instinct of self-preservation and are able to hold their fire in critical situations. On the other hand, the use of robots raises some serious ethical questions. For instance, under what circumstances, and to what extent, do we allow robots to act autonomously? What precautions should (and can) we take to prevent robots from running amok? Would the use of military robots not be counterproductive to winning the hearts and minds of occupied populations, or result in more desperate terrorist tactics, given the increasing asymmetry in warfare? (For an overview, see Lin, Bekey, & Abney, 2008; Lichocki, Kahn, & Billard, 2011; Olsthoorn & Royakkers, 2011; Schwarz, 2017.) A particularly pressing question is what to do when things go wrong: who, if anyone, can reasonably be held morally accountable for an act of violence that a) involves a military robot and b) would normally be described as a war crime?

The answer to that latter question depends on the answer to a prior one: when is there reasonable ground to hold an agent morally responsible for a certain outcome in the first place? Following Fischer and Ravizza (1998) on moral responsibility, we will assume here that agents can reasonably be held responsible only if they are moral agents, that is, persons (or organizations) who have control over their behavior and the resulting consequences. This means that agents can be held responsible for a certain decision only insofar as they have been able to make it freely and knowingly. The first term means that it is not reasonable to hold agents responsible for actions or their consequences if they were coerced or acted under duress. The second term, ‘knowingly,’ has an important normative aspect in that it relates to what people should know, or can with reason be expected to know, with respect to the relevant facts surrounding their decision or action.2

According to some authors (Asaro, 2007; Sparrow, 2007; Sharkey, 2008), the use of armed military robots makes the attribution of responsibility problematic, as it is not sufficiently clear who can be held responsible for civilian casualties and other collateral damage resulting from the use of military robots, whether through mechanical error or flawed judgment. Is it the designer/programmer, the field commander, the robot manufacturer, the robot controller/supervisor, or the nation that commissioned the robot? The answer to that question depends on a number of factors. For instance, was the cause a programming error, a malfunction, an accident, or intentional misuse? And did the procedure include a ‘man in the loop,’ that is, an element of human control, or was the military robot a fully autonomous or even learning machine?
