Could Robots Feel Pain?: How Can We Know?

Bruce MacLennan
Copyright © 2018 | Pages: 25
DOI: 10.4018/978-1-5225-2973-6.ch007

Abstract

This chapter considers the question of whether a robot could feel pain or experience other emotions, and it proposes empirical methods for answering this question. After a review of the biological functions of emotion and pain, the author argues that autonomous robots must fulfill similar functions, which require systems analogous to emotion and pain. Protophenomenal analysis, which involves parallel reductions in the phenomenological and neurological domains, is explained and applied to the “hard problem” of robot emotion and pain. The author outlines empirical approaches to answering the fundamental questions on which the possibility of robot consciousness in general depends. The author then explains the importance of sensors distributed throughout a robot's body for the emergence of coherent emotional phenomena in its awareness. Overall, the chapter elucidates the issue of robot pain and emotion and outlines an approach to resolving it empirically.
Chapter Preview

Introduction

As a society, we are already integrating machines into our bodies, and our ability to do this successfully grows daily, with true cyborgs as a seemingly inevitable end point. We are also progressing slowly but steadily toward autonomous robots whose behavior will give evidence of sentience. While it may be some time before such robots have intelligence comparable to ours, their eventual existence will raise ethical issues; indeed, we will face these issues even before robots reach the level of human intelligence. Just as we have ethical standards for the treatment of laboratory animals, such as rats, so we will face ethical dilemmas in the treatment of autonomous robots with comparable mental capacities. Our notions of cruelty and of the ethical treatment of other beings depend to a large degree on their capacity to feel pain and to suffer in other ways: to feel fear, distress, anxiety, anguish, sorrow, loneliness, and loss. But we suppose there is a fundamental difference between actually experiencing these things (as we do) and merely acting as though one is experiencing them (as most people suppose machines do). Even in the case of cyborgs, we would like a scientific basis for predicting the effects on an animal’s sentience of integration with artificial devices. These developments demand that we attend to long-standing issues in the relation of mind and matter, moving them from philosophical quandaries to practical ethical and, ultimately, legal questions. It is time to take them seriously and to develop methods to answer them reliably. Once we understand robot sentience better, we will be in a position to address the ethical issues, but those are beyond the scope of this chapter.

The word “consciousness” is used in several ways in philosophy (e.g., Block, 1995). One sense has been termed functional or access consciousness, which includes self-consciousness (having an internal representation of the self about which one can reason) and monitoring consciousness (internal scanning and higher-order representation of mental states) (Gutenplan, 1994, pp. 213–216). Of similar character is the passive frame theory of Morsella, Godwin, Jantz, Krieger, and Gazzaley (2015), which explains consciousness as “a frame that constrains and directs skeletal muscle output,” a sort of clearinghouse between proposed actions and (unconscious) action deciders. These are important ideas with relevance to autonomous robots. In this chapter, however, the focus is on phenomenal consciousness: the subjective experience of being a sentient being, of being aware, of feeling as opposed to merely reacting. In particular, the chapter examines whether a robot could feel pain or experience other emotions.

The principal problem of (phenomenal) consciousness is “to understand the relation between our subjective awareness and the brain processes that cause it” (MacLennan, 1995). This is commonly known as the hard problem of consciousness:

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. … It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does. (Chalmers, 1995)

More succinctly, “The hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious” (Weisberg, 2012). This chapter addresses the hard problem in the context of robot pain and emotion. It argues that a robot can be programmed to exhibit emotional behavior (and that this is a useful thing to do), but this raises the question of whether such a robot would feel its emotions (MacLennan, 2014). In particular, could it feel pain, and if so, under what conditions? Our goal is to approach these questions scientifically; that is, by means of hypotheses that can, in principle, be either confirmed or refuted empirically.
