The Functional Morality of Robots

Linda Johansson
DOI: 10.4018/978-1-4666-1773-5.ch020

Abstract

It is often argued that a robot cannot be held morally responsible for its actions. The author suggests that one should use the same criteria for robots as for humans regarding the ascription of moral responsibility. When deciding whether humans are moral agents, one should look at their behaviour and listen to the reasons they give for their judgments in order to determine that they understood the situation properly. The author suggests that this should be done for robots as well. In this regard, if a robot passes a moral version of the Turing Test, a Moral Turing Test (MTT), we should hold the robot morally responsible for its actions. This is supported by the impossibility of deciding who actually has (semantic rather than merely syntactic) understanding of a moral situation, and by two examples: the transfer of a human mind into a computer, and aliens who turn out to be robots.
Chapter Preview

The Moral Community

In order to decide whether we can hold robots morally responsible, we should begin by considering when we hold humans morally responsible. On what criteria do we, or do we not, hold humans morally responsible?
