Artificial Surprise

Luis Macedo (University of Coimbra, Portugal), Amilcar Cardoso (University of Coimbra, Portugal), Rainer Reisenzein (University of Greifswald, Germany) and Emiliano Lorini (Institute of Cognitive Sciences and Technologies, Italy & Institut de Recherche en Informatique de Toulouse, France)
DOI: 10.4018/978-1-60566-354-8.ch015

Abstract

This chapter reviews research on computational models of surprise. Part 1 begins with a description of the phenomenon of surprise in humans, reviews research on human surprise, and describes a psychological model of surprise (Meyer, Reisenzein, & Schützwohl, 1997). Part 2 is devoted to computational models of surprise, giving special prominence to the models proposed by Macedo and Cardoso (e.g., Macedo & Cardoso, 2001b) and by Lorini and Castelfranchi (e.g., Lorini & Castelfranchi, 2007). Part 3 compares the two models of artificial surprise with each other and with the Meyer et al. model of human surprise, discusses possible targets of future research, and considers possible practical applications.

Introduction

Considered by some theorists to be a biologically basic emotion (e.g., Izard, 1991), surprise has long been of interest to philosophers and psychologists. In contrast, the artificial intelligence and computational modeling communities have until recently largely ignored surprise (for an exception, see Ortony & Partridge, 1987). In recent years, however, several computational models of surprise, including concrete computer implementations, have been developed. The aim of these computational models of surprise—which are in part based on psychological theories and findings on the subject—is on the one hand to simulate surprise in order to advance the understanding of surprise in humans, and on the other hand to provide artificial agents (softbots or robots) with the benefits of a surprise mechanism. This second goal is motivated by the belief that surprise is as relevant for artificial agents as it is for humans. Ortony and Partridge (1987, p. 108) proposed that a surprise mechanism is “a crucial component of general intelligence”. Similarly, we propose that a surprise mechanism is an essential component of any anticipatory agent that, like humans, is resource-bounded and operates in an imperfectly known and changing environment. The function of the surprise mechanism in such an agent is the same as in humans: to promote the short- and long-term adaptation to unexpected events (e.g., Meyer et al., 1997). As will be seen, this function of surprise entails a close connection of surprise to curiosity and exploration (Berlyne, 1960), as well as to belief revision and learning (e.g., Charlesworth, 1969). Beyond that, surprise has been implicated as an essential element in creativity, aesthetic experience, and humor (e.g., Boden, 1995; Huron, 2006; Schmidhuber, 2006; Suls, 1971). Surprise is therefore also of importance to artificial intelligence researchers interested in the latter phenomena (Macedo & Cardoso, 2001a, 2002; Ritchie, 1999).

The chapter comprises three sections. Section 1 reviews psychological research on surprise. After a brief historical survey, the theory of surprise proposed by Meyer et al. (1997) is described in some detail. Section 2 is devoted to computational models of surprise, giving special prominence to the models of Macedo and Cardoso (e.g., Macedo & Cardoso, 2001b; Macedo et al., 2004) and Lorini and Castelfranchi (e.g., Lorini & Castelfranchi, 2007). Section 3 compares the two models of artificial surprise with each other and with the Meyer et al. (1997) model of human surprise, discusses possible targets of future research, and considers possible practical applications.

Key Terms in this Chapter

Emotions: In humans: mental states subjectively experienced as (typically) positive or negative feelings that are usually directed toward a specific object, and more or less frequently accompanied by physiological arousal, expressive reactions, or emotional behaviors. Typical examples are joy, sadness, fear, hope, anger, pity, pride, and envy. In artificial agents: corresponding processing states intended to simulate emotions of natural agents, usually humans. Note that depending on context, ‘emotion’ may also refer to the mechanism that produces emotions rather than to its products.

Surprise: In humans: a peculiar state of mind caused by unexpected events, or proximally the detection of a contradiction or conflict between newly acquired and pre-existing beliefs. In artificial agents: a corresponding processing state caused by the detection of a contradiction between input information and pre-existing information. Note that depending on context, “surprise” may also refer to the mechanism that produces surprise, rather than to its product.

Belief: In humans: a mental state (propositional attitude) in which a person holds a particular proposition p to be true. In artificial agents: a corresponding functional (processing) state.

Mismatch: Discrepancy or conflict between objects, in particular a contradiction between propositions or beliefs.

Affective: Colloquially: concerned with or arousing feelings or emotions; emotional. In today’s psychology, “affective” is often used as a cover term for all emotional and related phenomena (emotions, moods, evaluations...).

Disappointment: The unpleasant feeling resulting from an expectation failure concerning a desired event or, put differently, from the disconfirmation of the belief that the desired event would occur.

Conflict(s): See “mismatch.”

Computational Model(s): A computational model is a computer program that attempts to simulate a particular natural system or subsystem.

Misexpected: A proposition p is misexpected for an agent A if p is detected by A (or a subsystem of A) to conflict with, or to mismatch, a pre-existing, specific and usually explicit belief of A regarding p. In contrast, p is unexpected for A in the narrow sense of the word if p is detected by A to be inconsistent with A’s background beliefs. Finally, p is unexpected for A in the wide sense of the term if p is either misexpected for A, or unexpected in the narrow sense.

Agent(s): An autonomous entity capable of action.

Astonishment: A subform of surprise distinguished from regular surprise, according to different authors, by higher intensity, longer duration, or special causes (e.g., fully unexpected events [astonishment] in contrast to misexpected events [ordinary surprise]).

Artificial Surprise: Surprise synthesized in machines (artificial agents), usually intended as a simulation of surprise in natural agents, specifically humans. Depending on context, “surprise” may refer either to the mechanism that produces surprise, or to its product, the surprise generated.

Unexpected: A proposition p is unexpected for an agent A if p was explicitly or implicitly considered unlikely or improbable to be true by A, but is now regarded as true by A.
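The distinctions drawn in the definitions of “mismatch,” “misexpected,” and “unexpected” above can be illustrated with a small sketch. This is not the model of Macedo and Cardoso or of Lorini and Castelfranchi; the function names, the probability threshold, and the simple surprise value 1 − P(p) are illustrative assumptions chosen only to make the classification concrete.

```python
# Illustrative sketch of the key-term distinctions: a proposition is
# "misexpected" if it conflicts with a pre-existing, specific belief,
# and "unexpected" (narrow sense) if it merely conflicts with the
# agent's background beliefs. Threshold and surprise formula are
# simplifying assumptions, not the chapter's models.

def classify_observation(p, specific_beliefs, background_prob, threshold=0.5):
    """Classify an observed proposition p and return (label, surprise).

    specific_beliefs: dict mapping propositions to the agent's explicit
        prior probability that they are true.
    background_prob: probability assigned by the agent's background
        beliefs when it holds no specific belief about p.
    """
    if p in specific_beliefs:
        prob = specific_beliefs[p]
        # Conflict with a specific, usually explicit belief -> misexpected.
        label = "misexpected" if prob < threshold else "expected"
    else:
        prob = background_prob
        # No specific belief: inconsistency with background beliefs
        # -> unexpected in the narrow sense.
        label = "unexpected" if prob < threshold else "expected"
    # A simple (assumed) intensity: less probable events surprise more.
    surprise = max(0.0, 1.0 - prob)
    return label, surprise


label, s = classify_observation(
    "rain_in_desert",
    specific_beliefs={"rain_in_desert": 0.05},
    background_prob=0.5,
)
# label is "misexpected"; surprise is close to 0.95
```

Both cases fall under “unexpected” in the wide sense of the term; the sketch merely makes explicit which kind of belief the new proposition conflicts with.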

Anticipation: In humans, “anticipation” refers to the mental act or process of “looking forward” by means of forming predictions or beliefs about the future. An anticipatory agent is a natural or artificial agent that makes decisions based on predictions, expectations, or beliefs about the future.

Expectation: In common parlance, an expectation is a belief regarding a future state of affairs. In the literature on surprise, “expectation” is frequently used synonymously with “belief”.
