Artificial Surprise

Luis Macedo (University of Coimbra, Portugal), Amilcar Cardoso (University of Coimbra, Portugal), Rainer Reisenzein (University of Greifswald, Germany) and Emiliano Lorini (Institute of Cognitive Sciences and Technologies, Italy & Institut de Recherche en Informatique de)
DOI: 10.4018/978-1-60566-354-8.ch015

This chapter reviews research on computational models of surprise. Part 1 begins with a description of the phenomenon of surprise in humans, reviews research on human surprise, and describes a psychological model of surprise (Meyer, Reisenzein, & Schützwohl, 1997). Part 2 is devoted to computational models of surprise, giving special prominence to the models proposed by Macedo and Cardoso (e.g., Macedo & Cardoso, 2001b) and by Lorini and Castelfranchi (e.g., Lorini & Castelfranchi, 2007). Part 3 compares the two models of artificial surprise with each other and with the Meyer et al. model of human surprise, discusses possible targets of future research, and considers possible practical applications.
Chapter Preview


Considered by some theorists to be a biologically basic emotion (e.g., Izard, 1991), surprise has long been of interest to philosophers and psychologists. In contrast, the artificial intelligence and computational modeling communities have until recently largely ignored surprise (for an exception, see Ortony & Partridge, 1987). In recent years, however, several computational models of surprise, including concrete computer implementations, have been developed. The aim of these computational models of surprise, which are in part based on psychological theories and findings on the subject, is on the one hand to simulate surprise in order to advance the understanding of surprise in humans, and on the other hand to provide artificial agents (softbots or robots) with the benefits of a surprise mechanism. This second goal is motivated by the belief that surprise is as relevant for artificial agents as it is for humans. Ortony and Partridge (1987, p. 108) proposed that a surprise mechanism is “a crucial component of general intelligence”. Similarly, we propose that a surprise mechanism is an essential component of any anticipatory agent that, like humans, is resource-bounded and operates in an imperfectly known and changing environment. The function of the surprise mechanism in such an agent is the same as in humans: to promote the short- and long-term adaptation to unexpected events (e.g., Meyer et al., 1997). As will be seen, this function of surprise entails a close connection of surprise to curiosity and exploration (Berlyne, 1960), as well as to belief revision and learning (e.g., Charlesworth, 1969). Beyond that, surprise has been implicated as an essential element in creativity, aesthetic experience, and humor (e.g., Boden, 1995; Huron, 2006; Schmidhuber, 2006; Suls, 1971). Surprise is therefore also of importance to artificial intelligence researchers interested in the latter phenomena (Macedo & Cardoso, 2001a, 2002; Ritchie, 1999).

The chapter comprises three sections. Section 1 reviews psychological research on surprise. After a brief historical survey, the theory of surprise proposed by Meyer et al. (1997) is described in some detail. Section 2 is devoted to computational models of surprise, giving special prominence to the models of Macedo and Cardoso (e.g., Macedo & Cardoso, 2001b; Macedo et al., 2004) and Lorini and Castelfranchi (e.g., Lorini & Castelfranchi, 2007). Section 3 compares the two models of artificial surprise with each other and with the Meyer et al. (1997) model of human surprise, discusses possible targets of future research, and considers possible practical applications.

Key Terms in this Chapter

Emotions: In humans: mental states subjectively experienced as (typically) positive or negative feelings that are usually directed toward a specific object, and more or less frequently accompanied by physiological arousal, expressive reactions, or emotional behaviors. Typical examples are joy, sadness, fear, hope, anger, pity, pride, and envy. In artificial agents: corresponding processing states intended to simulate emotions of natural agents, usually humans. Note that depending on context, ‘emotion’ may also refer to the mechanism that produces emotions rather than to its products.

Surprise: In humans: a peculiar state of mind caused by unexpected events or, more proximally, by the detection of a contradiction or conflict between newly acquired and pre-existing beliefs. In artificial agents: a corresponding processing state caused by the detection of a contradiction between input information and pre-existing information. Note that depending on context, “surprise” may also refer to the mechanism that produces surprise, rather than to its product.

Belief: In humans: a mental state (propositional attitude) in which a person holds a particular proposition p to be true. In artificial agents: a corresponding functional (processing) state.

Mismatch: Discrepancy or conflict between objects, in particular a contradiction between propositions or beliefs.

Affective: Colloquially: concerned with or arousing feelings or emotions; emotional. In today’s psychology, “affective” is often used as a cover term for all emotional and related phenomena (emotions, moods, evaluations...).

Disappointment: The unpleasant feeling resulting from an expectation failure concerning a desired event, that is, from the disconfirmation of the belief that the desired event would occur.

Conflict(s): See “mismatch.”

Computational Model(s): A computational model is a computer program that attempts to simulate a particular natural system or subsystem.

Misexpected: A proposition p is misexpected for an agent A if p is detected by A (or a subsystem of A) to conflict with, or to mismatch, a pre-existing, specific and usually explicit belief of A regarding p. In contrast, p is unexpected for A in the narrow sense of the word if p is detected by A to be inconsistent with A’s background beliefs. Finally, p is unexpected for A in the wide sense of the term if p is either misexpected for A, or unexpected in the narrow sense.
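The distinction between misexpected and unexpected propositions can be illustrated computationally. The sketch below is only an illustration of the definitions above, not a mechanism described in the chapter; the representation of specific beliefs as a probability map, the 0.5 threshold, and the consistency-check function are all assumptions introduced for this example.

```python
# Illustrative sketch: classifying a newly observed proposition as
# misexpected, unexpected in the narrow sense, or neither.
# Belief representation and threshold are assumptions for this example.

def classify(p, specific_beliefs, background_consistent):
    """
    p: the observed proposition (a hashable label).
    specific_beliefs: dict mapping propositions to the agent's explicit
        degree of belief that they are true (0.0-1.0).
    background_consistent: function deciding whether p is consistent
        with the agent's background beliefs.
    """
    if p in specific_beliefs and specific_beliefs[p] < 0.5:
        # p conflicts with a pre-existing, specific, explicit belief of A.
        return "misexpected"
    if not background_consistent(p):
        # p merely conflicts with A's background beliefs.
        return "unexpected (narrow sense)"
    return "expected"

def unexpected_wide(p, specific_beliefs, background_consistent):
    # Wide sense: either misexpected or unexpected in the narrow sense.
    return classify(p, specific_beliefs, background_consistent) != "expected"
```

For instance, an event the agent explicitly believed improbable comes out as misexpected, while an event the agent never formed a specific belief about, but which clashes with its background beliefs, comes out as unexpected in the narrow sense; both count as unexpected in the wide sense.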

Agent(s): An autonomous entity capable of action.

Astonishment: A subform of surprise distinguished from regular surprise, according to different authors, by higher intensity, longer duration, or special causes (e.g., fully unexpected events [astonishment] in contrast to misexpected events [ordinary surprise]).

Artificial Surprise: Surprise synthesized in machines (artificial agents), usually intended as a simulation of surprise in natural agents, specifically humans. Depending on context, “surprise” may refer either to the mechanism that produces surprise or to its product, the surprise generated.

Unexpected: A proposition p is unexpected for an agent A if p was explicitly or implicitly considered unlikely to be true by A, but is now regarded as true by A.

Anticipation: In humans, “anticipation” refers to the mental act or process of “looking forward” by means of forming predictions or beliefs about the future. An anticipatory agent is a natural or artificial agent who makes decisions based on predictions, expectations, or beliefs about the future.

Expectation: In common parlance, an expectation is a belief regarding a future state of affairs. In the literature on surprise, “expectation” is frequently used synonymously with “belief”.

Chapter 15: Artificial Surprise (pages 267-291)