Moral Emotions for Autonomous Agents

Antoni Gomila (University Illes Balears, Spain) and Alberto Amengual (International Computer Science Institute, USA)
DOI: 10.4018/978-1-60566-354-8.ch010
In this chapter we raise some of the moral issues involved in the current development of robotic autonomous agents. Starting from the connection between autonomy and responsibility, we distinguish two sorts of problems: those having to do with guaranteeing that the behavior of the artificial cognitive system falls within the area of the permissible, and those having to do with endowing such systems with whatever abilities are required for engaging in moral interaction. Only in the second case can we speak of full-blown autonomy, or moral autonomy. We illustrate the first type of case with Arkin’s proposal of a hybrid architecture for the control of military robots. As for the second kind of case, that of full-blown autonomy, we argue that a motivational component is needed to ground the self-orientation and the pattern of appraisal required, and outline how such a motivational component might give rise to interaction in terms of moral emotions. We end by suggesting limits to a straightforward analogy between natural and artificial cognitive systems from this standpoint.
Chapter Preview

1. Introduction

The increasing success of Robotics in building autonomous agents, with rising levels of intelligence and sophistication, has taken the nightmare of “the devil robot” out of the hands of science fiction writers and turned it into a real pressure on roboticists to design control systems able to guarantee that the behavior of such robots complies with minimal ethical requirements. Autonomy goes with responsibility, in a nutshell. Otherwise the designers risk being held responsible themselves for any wrong deeds of the autonomous systems. In a way, then, the predictability and reliability of artificial systems pull against their autonomy (flexibility, novelty in novel circumstances). The increase in autonomy raises the issue of responsibility and, hence, the question of right and wrong, of moral reliability.

What these minimal ethical requirements are may vary according to the kind of purpose these autonomous systems are built for. In the forthcoming years an increase in “service” robots is foreseeable: machines specially designed to deal with particularly risky or difficult tasks in a flexible way. Thus, for instance, one of the leading areas of roboethical research concerns autonomous systems for military purposes; for such new systems, non-human supervision of the use of lethal weapons may be a goal of the design, so that before they are allowed to be turned on, a guarantee must be clearly established that such robots will not kill innocent people, fire on surrendering combatants, or attack fellow troops. In this area, the prescribed minimal requirements are those of the Laws of War made explicit in the Geneva Conventions and the Rules of Engagement each army may establish for its troops. Other robots (for rescue, for fire intervention, for domestic tasks, for sexual intercourse) may also need “moral” norms to constrain what to do in particular circumstances (“is it ok to let one person starve to feed two others?”). Much more so when we think of a middle-range future and speculate about the possibility of really autonomous systems, or systems that “evolve” in the direction of higher autonomy: we really should start thinking about how to assure that such systems are going to respect our basic norms of humanity and social life, if they are to be autonomous in the fullest sense. So the question we want to focus on in this paper is: how should we deal with this particular challenge?

The usual way to deal with this challenge is a variation or extension of the existing deliberative/reactive autonomous robotic architectures, with the goal of providing the system with some kind of higher-level control system, a moral reasoning system, based on moral principles and rules and some sort of inferential mechanism, to assess and judge the different situations in which the robot may find itself, and act accordingly. The inspiration here is chess design: what is required is a way to anticipate the consequences of one’s possible actions and to weigh those alternatives according to some sort of valuation algorithm that excludes some of those possibilities from consideration altogether. Quite apart from the enormous difficulty of finding out which principles and rules can capture our “moral sense” in an explicit form, this project also faces the paradoxes and antinomies that lurk in any formal axiomatic system, well known from the old days of Asimov’s laws. So to speak, this approach inherits the same sort of difficulties known as the “symbol grounding” and “frame” problems in Cognitive Science.
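The deliberative approach just described can be pictured as an “ethical governor”: enumerate candidate actions, predict their consequences with a forward model, filter out any action that violates an explicit prohibition, and only then choose by utility. The following is a minimal sketch under that reading; the action names, the `Outcome` fields, and the toy prediction table are invented for illustration and are not from the chapter.

```python
# Sketch of a rule-based filter layered over a deliberative architecture:
# candidate actions are screened by explicit prohibitions before a
# utility-based choice is made. All names and values are illustrative.

from dataclasses import dataclass

@dataclass
class Outcome:
    harms_noncombatant: bool
    mission_value: float

def predict(action):
    # Stand-in for a forward model that anticipates consequences.
    table = {
        "fire": Outcome(harms_noncombatant=True, mission_value=0.9),
        "hold": Outcome(harms_noncombatant=False, mission_value=0.2),
        "warn": Outcome(harms_noncombatant=False, mission_value=0.5),
    }
    return table[action]

def permissible(outcome):
    # An explicit rule in the style of a Laws-of-War prohibition.
    return not outcome.harms_noncombatant

def choose(actions):
    # Impermissible actions are excluded from consideration altogether,
    # then the best remaining option is selected by expected value.
    allowed = [a for a in actions if permissible(predict(a))]
    if not allowed:
        return None  # no permissible action: default to inaction
    return max(allowed, key=lambda a: predict(a).mission_value)

print(choose(["fire", "hold", "warn"]))  # "fire" is excluded; "warn" wins
```

The sketch also makes the chapter’s worry concrete: everything hinges on `predict` and `permissible` being correct and complete in an explicit form, which is exactly where the frame and symbol-grounding problems bite.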

However, it might turn out that there is a better way to face the challenge: instead of conceiving of morality as a higher level of control based on a specific kind of reasoning, it could be conceived as an emotional level of control, in line with the current trend in Social Neuroscience and Psychology that points in this direction (for an illustration, see the special double issue in volume 7 of the journal Social Neuroscience). From this point of view, which in fact takes up the “moral sense” tradition in Ethics, moral judgement is not a business of reason and truth but of emotion in the first place; not of analytical pondering of rights and wrongs, but of intuitive, fast, immediate affective valuation of a situation (which may be submitted to a more careful, detailed, reflective analysis later on), at least at the ground level. From this point of view, in order to build systems with some sort of “moral” understanding and compliance, it might be a better option to start by building systems with a practical understanding of emotions and emotional interaction, in particular moral emotions. Rights and norms, so the story goes, come implicitly packed along with these reactive attitudes, and thus acquire the power to motivate and mobilize that is characteristic of human morality.
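One way to picture this emotional level of control is as a fast appraisal stage that assigns an affective valence to the situation and can veto action before any slower, rule-based deliberation runs. The sketch below is speculative: the appraisal dimensions, weights, and veto threshold are invented for illustration, not taken from the chapter or from any particular appraisal theory.

```python
# Sketch of a two-level control loop: a fast appraisal stage produces an
# immediate affective valuation that can reject an option outright before
# slower deliberation is engaged. Features and weights are illustrative.

def appraise(situation):
    # Fast, intuitive valuation: a weighted sum over a few appraisal
    # dimensions (harm to others, norm violation, threat to self).
    weights = {"harm_to_other": -1.0, "norm_violation": -0.8, "threat": -0.5}
    return sum(weights[k] * situation.get(k, 0.0) for k in weights)

def react(situation, veto_threshold=-0.7):
    # The affective valuation acts first; only situations it does not
    # reject are passed on to reflective, analytical processing.
    valence = appraise(situation)
    if valence < veto_threshold:
        return "refuse"  # moral-emotion analogue: immediate rejection
    return "deliberate"

print(react({"harm_to_other": 1.0}))  # strong negative valence: "refuse"
print(react({"threat": 0.5}))         # mild valence: handed to deliberation
```

The design choice mirrors the chapter’s contrast: the ground-level verdict is produced by a cheap, immediate valuation, and the careful weighing of rights and wrongs is an optional second pass rather than the primary mechanism.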

Complete Chapter List

Chapter 1. Emotional Modeling in an Interactive Robotic Head (Oscar Deniz, Javier Lorenzo, Mario Hernández, Modesto Castrillón)
Chapter 2. Automatic Detection of Emotion in Music: Interaction with Emotionally Sensitive Machines (Cyril Laurier, Perfecto Herrera)
Chapter 3. Facial Expression Analysis, Modeling and Synthesis: Overcoming the Limitations of Artificial Intelligence with the Art of the Soluble (Christoph Bartneck, Michael J. Lyons)
Chapter 4. Multirobot Team Work with Benevolent Characters: The Roles of Emotions (Sajal Chandra Banik, Keigo Watanabe, Maki K. Habib, Kiyotaka Izumi)
Chapter 5. Affective Goal and Task Selection for Social Robots (Matthias Scheutz, Paul Schermerhorn)
Chapter 6. Robotic Emotions: Navigation with Feeling (Christopher P. Lee-Johnson, Dale A. Carnegie)
Chapter 7. Emotions, Diffusive Emotional Control and the Motivational Problem for Autonomous Cognitive Systems (C. Gros)
Chapter 8. Robots React, but Can They Feel? (Bruce J. MacLennan)
Chapter 9. Personality and Emotions in Robotics from the Gender Perspective (Mercedes García-Ordaz, Rocío Carrasco-Carrasco, Francisco José Martínez-López)
Chapter 10. Moral Emotions for Autonomous Agents (Antoni Gomila, Alberto Amengual)
Chapter 11. An Emotional Perspective for Agent-Based Computational Economics (Pietro Cipresso, Jean-Marie Dembele, Marco Villamira)
Chapter 12. Unfolding Commitments Management: A Systemic View of Emotions (Michel Aubé)
Chapter 13. A Cognitive Appraisal Based Approach for Emotional Representation (Sigerist J. Rodríguez, Pilar Herrero, Olinto J. Rodríguez)
Chapter 14. Emotion Generation Based on a Mismatch Theory of Emotions for Situated Agents (Clément Raïevsky, François Michaud)
Chapter 15. Artificial Surprise (Luis Macedo, Amilcar Cardoso, Rainer Reisenzein, Emiliano Lorini)
Chapter 16. A Theory of Emotions Based on Natural Language Semantics (Tom Adi)
Chapter 17. Emotion in the Turing Test: A Downward Trend for Machines in Recent Loebner Prizes (Huma Shah, Kevin Warwick)
Chapter 18. Artificial Emotional Intelligence in Virtual Creatures (Félix Francisco Ramos Corchado, Héctor Rafael Orozco Aguirre, Luis Alfonso Razo Ruvalcaba)
Chapter 19. Physics and Cognitive-Emotional-Metacognitive Variables: Learning Performance in the Environment of CTAT (Sarantos I. Psycharis)
Chapter 20. Emotional Memory and Adaptive Personalities (Anthony G. Francis Jr., Manish Mehta, Ashwin Ram)
Chapter 21. Computer-Based Learning Environments with Emotional Agents (Dorel Gorga, Daniel K. Schneider)
Chapter 22. Emotional Ambient Media (Artur Lugmayr, Tillmann Dorsch, Pabo Roman Humanes)
Chapter 23. Modelling Hardwired Synthetic Emotions: TPR 2.0 (Jordi Vallverdú, David Casacuberta)
Chapter 24. Invisibility and Visibility: The Shadows of Artificial Intelligence (Cecile K.M. Crutzen, Hans-Werner Hein)

About the Contributors