From Computational Emotional Models to HRI

J. Vallverdú (Department of Philosophy, Autonomous University of Barcelona (UAB), Cerdanyola del Vallès, Spain), D. Casacuberta (Department of Philosophy, Autonomous University of Barcelona (UAB), Cerdanyola del Vallès, Spain), T. Nishida (Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Kyoto, Japan), Y. Ohmoto (Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Kyoto, Japan), S. Moran (Mixed Reality Lab, School of Computer Science, University of Nottingham, Nottingham, UK) and S. Lázare (Department of Anthropology, Autonomous University of Barcelona (UAB), Cerdanyola del Vallès, Spain)
Copyright: © 2013 |Pages: 15
DOI: 10.4018/ijrat.2013070102


During a previous stage of our research we developed a computer simulation dealing with synthetic emotions, called 'The Panic Room' (TPR). With it, the authors took the first steps towards an evolutionary machine, defining the key elements involved in the development of complex actions (that is, creating a physical intuitive ontology from a bottom-up approach). After the successful initial results of TPR, the authors considered it necessary to develop a new simulation ("TPR 2.0"), more complex and with better visualisation characteristics. The authors then created a simulation of the evolution of emotions using genetic algorithms (the Game of Emotions, GOE), whose results on the value of specific emotions in social domains were applied to real HRI robotic environments at Nishidalab (Japan), focused on the notions of empathy and proxemics. There the authors performed an experiment involving humans from two different native-speaking cultures and one robot introduced as three different machines. The final HRI data were analysed from several research-field perspectives: psychology, philosophy, robotic sciences and anthropology.
Article Preview

1. Introduction

After working on ambient intelligence and emotions with The Panic Room 1.0 (a simulation of an ambient intelligence device that could display a form of protoemotion adapted to solving a very simple task), we updated and improved that research with a computer simulation called TPR 2.0. We then designed a new simulation, GOE (Game of Emotions), based on genetic algorithms, to understand the mechanisms by which complex emotions emerge from basic protoemotions. Finally, we applied some of the resulting ideas on emotions and moods to real HRI (Human-Robot Interaction) environments.

1.1. Bottom Up Approach

AI and robotics have tried intensively to develop intelligent machines over the last 50 years. In that time, two different approaches to AI research have emerged, which we can summarise as the top-down and bottom-up approaches:

  • 1. Top Down: the symbol system hypothesis (Douglas Lenat, Herbert Simon). The top-down approach constitutes the classical model. It works with symbol systems, which represent entities in the world; a reasoning engine operates on the symbols in a domain-independent way. SHRDLU (Winograd), Cyc (Douglas Lenat) and expert systems are examples of it.

  • 2. Bottom Up: the physical grounding hypothesis (situated activity, situated embodiment, connectionism). The bottom-up approach (led by Rodney Brooks) is based on the physical grounding hypothesis. Here, the system is connected to the world via a set of sensors, and the engine extracts all its knowledge from these physical sensors. Brooks speaks of "intelligence without representation": complex intelligent systems emerge as a result of complex interactive and independent machines (Vallverdú, 2006).

Although the top-down approach has been very successful on several levels (cf. excellent expert systems, or the chess machine Deep Blue), we consider that approaches to emotions made from this perspective cannot embrace or reproduce the nature of an emotion. Like Brooks (1991), we consider intelligence an emergent property of systems, and in that process emotions play a fundamental role (Sloman & Croucher, 1981; DeLancey, 2001). To achieve an 'artificial self' we must develop not only the intelligent characteristics of human beings but also their emotional disposition towards the world: we put the artificial mind back into its (evolutionary) artificial nature.
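The bottom-up stance described above can be made concrete with a toy example. The sketch below is our own illustration of a Brooks-style reactive controller, not code from any cited system: behaviour is driven directly by sensor readings through layered sensor-action couplings, with no symbolic world model; the sensor names and thresholds are hypothetical.

```python
# A minimal sketch of reactive, bottom-up control: no world model,
# just prioritised sensor-to-action couplings (subsumption-style layers).
# Sensor names and thresholds are illustrative assumptions.

def reactive_step(sensors):
    """Map raw sensor readings to an action; higher layers subsume lower ones."""
    # Layer 2 (highest priority): avoid imminent collision
    if sensors["front_distance"] < 0.2:
        return "turn_away"
    # Layer 1: move toward a light source if one is detected
    if sensors["light_level"] > 0.7:
        return "approach_light"
    # Layer 0 (default): wander
    return "wander"

# Any apparently 'intelligent' trajectory emerges from repeated
# sensor-action cycles, not from reasoning over symbols.
print(reactive_step({"front_distance": 0.1, "light_level": 0.9}))  # turn_away
print(reactive_step({"front_distance": 0.5, "light_level": 0.9}))  # approach_light
print(reactive_step({"front_distance": 0.5, "light_level": 0.1}))  # wander
```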

1.2. Protoemotions and Action

As we have argued extensively elsewhere (Casacuberta 2000, 2004; Vallverdú 2007), emotions play a fundamental role in rational processes and in the development of complex behaviour, including decision making (Schwarz 2000). There is a huge body of literature on these ideas, which we will not analyse here but which can be consulted (Damasio 1994; Edelman 2000; Denton 2006; Ramachandran 2004).

After describing emotions as alarm systems that activate specific responses (Vallverdú & Casacuberta, 2008), we considered it necessary to minimise the number of basic emotions and chose two: pain and pleasure, considered as negative and positive inputs, respectively. We called them protoemotions, because they are the two basic regulators of activity. In this sense, we considered synthetic emotions as "an independently embedded (or hard-wired) self-regulating system that reacts to the diverse inputs that the system can collect (internal or external)" (op. cit., 105). From this point of view, the cybernetic concept of feedback, as a property of biological entities, is added to our conceptual model.
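The two-protoemotion feedback idea can be sketched in a few lines of code. This is a minimal illustration under our own assumptions (the class name, the internal state variable, the update rule and all constants are hypothetical), not the TPR implementation: pain and pleasure act as negative and positive inputs that regulate the agent's disposition to persist in or withdraw from its current situation.

```python
# A minimal sketch of pain/pleasure as self-regulating feedback.
# Class name, state variable and constants are illustrative assumptions,
# not taken from the TPR simulation.

class ProtoemotionalAgent:
    def __init__(self):
        self.arousal = 0.0  # internal regulatory state, clamped to [-1, 1]

    def perceive(self, stimulus):
        """Pleasant inputs (positive) raise arousal; painful ones (negative) lower it."""
        self.arousal = max(-1.0, min(1.0, self.arousal + 0.5 * stimulus))

    def act(self):
        """The protoemotional state selects between approach and avoidance."""
        if self.arousal < -0.3:
            return "withdraw"   # pain-dominated: avoid the current situation
        if self.arousal > 0.3:
            return "approach"   # pleasure-dominated: persist
        return "explore"        # neutral: keep sampling the environment

agent = ProtoemotionalAgent()
agent.perceive(-1.0)               # a painful input
print(agent.act())                 # withdraw
agent.perceive(1.0); agent.perceive(1.0)  # repeated pleasant input
print(agent.act())                 # approach
```

The point of the sketch is the feedback loop itself: the system's outputs change its internal state, which in turn changes how it responds to the next input, with no representation of what the stimuli are.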

We must also take into account another use of the term protoemotions: clinicians characterise the emotions of psychopaths with this term, referring to their "primitive responses to immediate needs" (Pitchford, 2001). Our idea of protoemotions has no relation at all to psychopathy; it refers instead to basic emotions, those at the bottom of the complex and subtle pyramid of emotional activity (such as anger, fear or sadness). Following Wolfram (2002/3), we agree that "many very simple programs produce great complexity" and that "there is never an immediate reason to go beyond studying systems with rather simple underlying rules" (op. cit., 110).
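Wolfram's claim that very simple programs produce great complexity is classically illustrated by elementary cellular automata. The sketch below runs rule 110 (our choice of example; Wolfram showed it generates intricate, even Turing-complete, behaviour) from a single live cell, using an 8-entry update table and nothing more.

```python
# Elementary cellular automaton, rule 110: a very simple program
# whose behaviour is famously complex. The rule number's 8 bits
# encode the entire update table.

RULE = 110

def step(cells):
    """Apply one synchronous rule-110 update to a circular row of 0/1 cells."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch structure emerge.
row = [0] * 31 + [1] + [0] * 31
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```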
