The hypothesis that artificial emotion-like mechanisms can improve the adaptive performance of robots and intelligent systems has gained considerable support in recent years. To test this hypothesis, a mobile robot navigation system has been developed that employs affect and emotion as adaptation mechanisms. The robot’s emotions can arise from hard-coded interpretations of local stimuli, as well as from learned associations stored in global maps. They are expressed as modulations of planning and control parameters, and also as location-specific biases to path-planning. Our focus is on affective mechanisms that have practical utility rather than aesthetic appeal, so we present an extensive quantitative analysis of the system’s performance in a range of experimental situations.
Artificial affect representations can be broadly categorized into symbolic and neurophysiological models (Aylett, 2006). Symbolic models are typically favored by large-scale general-purpose AI frameworks, and emphasize cognitive roles of affect such as goal prioritization and memory management. They are often based on cognitive appraisal theories of emotion such as that proposed by Ortony et al. (1988). These types of models often have limited applicability in the robotics domain, where symbolic objects are not simply assumed to exist; they must be derived from real-world sensor data.
Thus, robotic implementations are typically more heavily inspired by neurobiological theories of emotion such as that proposed by Damasio (1999). Affect and emotions may be employed as internal ‘sensors’, or as discrete states that drive action selection. One of the main functions of this type of affect representation is to motivate a robot to respond quickly to certain events without waiting for its slower cognitive processes to ponder the situation. Affect is thus regarded as a potential replacement for deliberative processing in robotic controllers. Interactions between affect and deliberative processing have received little attention in the robotics domain, because they are often viewed as competitors for the same role.
One robotic affect model that has inspired various implementations is Velásquez’s Cathexis architecture (Velásquez, 1997), which models Ekman’s six basic emotions (anger, fear, happiness, sadness, disgust and surprise) (Ortony and Turner, 1990) as ‘proto-specialist’ agents (Minsky, 1986) executing in parallel. Emotions are one of several inputs that control behavior activation. A similar approach is adopted by Breazeal (2003) for the robotic head Kismet. In Kismet’s model, stimuli are tagged with three dimensions of affective information (valence, arousal and stance), and their associated emotional responses compete for activation in a winner-takes-all manner. In addition to driving certain cognitive processes, emotions are portrayed as variations in the robot’s facial expression, gaze direction and tone of voice.
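The winner-takes-all scheme described above can be sketched in a few lines. This is an illustrative toy, not Kismet’s actual implementation: the mapping from (valence, arousal, stance) tags to per-emotion activations and the activation threshold are assumptions for the example.

```python
# Illustrative sketch of winner-takes-all emotion activation.
# The activation formulas and threshold are assumptions, not Kismet's.
from dataclasses import dataclass

@dataclass
class AffectiveStimulus:
    valence: float  # pleasantness of the stimulus, -1..1
    arousal: float  # activation level, 0..1
    stance: float   # approachability, -1..1

def emotion_activations(s: AffectiveStimulus) -> dict:
    # Hypothetical mapping from affective tags to emotion activations.
    return {
        "happiness": max(0.0, s.valence) * s.arousal,
        "fear":      max(0.0, -s.stance) * s.arousal,
        "anger":     max(0.0, -s.valence) * s.arousal,
        "sadness":   max(0.0, -s.valence) * (1.0 - s.arousal),
    }

def winner_takes_all(s: AffectiveStimulus, threshold: float = 0.1):
    # Emotional responses compete; only the strongest is expressed,
    # and only if it exceeds an activation threshold.
    acts = emotion_activations(s)
    emotion, level = max(acts.items(), key=lambda kv: kv[1])
    return emotion if level >= threshold else None

# A negative, high-arousal stimulus activates anger in this toy model.
winner = winner_takes_all(
    AffectiveStimulus(valence=-0.8, arousal=0.9, stance=0.2))
```

In this toy, a negatively valenced, high-arousal stimulus yields the strongest activation for anger, which then suppresses the competing responses.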
Key Terms in this Chapter
Survival Drive: An affective state governing parameter modulations that directly influence the likelihood or potential outcomes of existence-threatening events such as collisions.
Deliberative Navigation: Following paths that are planned utilizing global maps constructed a priori and/or updated in response to environmental dynamics.
Affective Stimulus: A function of an internal or external event that elicits an affective response.
Dynamic Window: A rectangular search space of discrete linear and angular velocities bounded by a robot’s kinematic and dynamic constraints.
Reactive Control: Real-time selection of motor outputs in response to short-term sensor data or local map data.
Mapped Emotion: A set of emotional intensities associated with specific locations in the environment due to previous stimuli.
Strategic Drive: An affective state governing parameter modulations that alter cognitive strategies without directly affecting an intelligent system’s prospects for survival.
Global Emotion: A single intensity value representing an emotion elicited by stimuli perceived at the present time.
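Several of these terms can be illustrated together in a short sketch: a dynamic window of admissible velocities whose upper speed bound is modulated by a survival-drive intensity. The parameter names, the discretization, and the specific fear-to-speed mapping are assumptions made for the example, not the chapter’s actual controller.

```python
# Illustrative sketch (assumed parameters, not the chapter's controller):
# a dynamic window of discrete (linear, angular) velocities bounded by
# acceleration limits, with a survival-drive (fear-like) intensity
# shrinking the maximum linear speed as a parameter modulation.

def frange(lo, hi, n):
    # n evenly spaced samples from lo to hi inclusive.
    if n == 1:
        return [lo]
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

def dynamic_window(v, w, dt, a_max=0.5, alpha_max=1.0,
                   v_max=1.0, w_max=2.0, fear=0.0, n=5):
    # Fear-modulated speed cap: higher survival drive -> more caution.
    v_cap = v_max * (1.0 - 0.5 * fear)
    # Velocities reachable within dt, clipped to the robot's limits.
    v_lo = max(0.0, v - a_max * dt)
    v_hi = min(v_cap, v + a_max * dt)
    w_lo = max(-w_max, w - alpha_max * dt)
    w_hi = min(w_max, w + alpha_max * dt)
    # Rectangular search space of discrete velocity pairs.
    return [(lv, av)
            for lv in frange(v_lo, v_hi, n)
            for av in frange(w_lo, w_hi, n)]

# With fear = 0.6, the linear speed cap drops from 1.0 to 0.7 m/s.
candidates = dynamic_window(v=0.8, w=0.0, dt=0.25, fear=0.6)
```

A reactive controller would score each candidate pair against local sensor data and execute the best one; the affective modulation simply reshapes the search space before scoring.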