Abstract Expressions of Affect

Alwin de Rooij (Centre for Creativity in Professional Practice, City University London, London, UK), Joost Broekens (Interactive Intelligence Group, Delft University of Technology, Delft, The Netherlands) and Maarten H. Lamers (Leiden Institute for Advanced Computer Science, Leiden University, Leiden, The Netherlands)
Copyright: © 2013 | Pages: 31
DOI: 10.4018/jse.2013010101
What form should happiness take? And how is disgust shaped? This research investigates how synthetic affective expressions can be designed with minimal reference to the human body. The authors propose that the recognition and attribution of affect can be triggered by appropriately presenting the bare essentials used in the mental processes that mediate affect recognition and attribution. The novelty of the proposed approach is that it builds on these mental processes independently of the configuration of the human body and face. The approach is grounded in (a) research on the role of abstraction in perception, (b) the elementary processes and features relevant to visual emotion recognition and emotion attribution, and (c) how such features can be used, and combined, to generate a synthetic emotion expression. To further develop the argument for this approach, the authors present a pilot study that shows the feasibility of combining affective features independently of the human configuration, using abstraction to create consistent emotional attributions. Finally, the authors discuss the potential implications of their approach for the design of affective robots. The proposed design approach promises to maximize the freedom to integrate intuitively understandable affective expressions with the other morphological design factors a technology may require, providing synthetic affective expressions that suit the inherently artificial and applied nature of affective technology.

Over the last decade, human-machine interaction research has increasingly explored the role of affect within and among humans in order to develop technologies that can function appropriately and intelligently in personal and social environments. This is fundamental to a variety of techno-scientific research areas such as affective computing (Picard, 1997) and social robotics (Fong et al., 2003). Such technologies can serve as real-world research platforms for theoretical affective and social science (Cañamero, 2005), but they thrive mostly on the promise of practical application in, for example, healthcare (Broekens et al., 2009), therapy (Dautenhahn et al., 2002), and education (Saerbeck et al., 2010).

At the core of these technologies lies the challenge of designing an appearance that is intuitive to people in terms of social and affective interaction, yet simultaneously satisfies technological and functional requirements. Current design strategies typically attempt to mimic the human or animal form, realistically or iconically, often bolstered by design principles from character animation (Bartneck & Forlizzi, 2004; Blow et al., 2006; Fong et al., 2003; Hegel et al., 2009). There are, however, situations in which anthropomorphic or zoomorphic mimicry constrains the optimal design of affective technologies. For instance, affective communication benefits the design of a rescue robot by providing an intuitive warning signal to people; the configuration of the human body, however, may not be optimal for a rescue robot, which must operate in circumstances where humans cannot. Indeed, can you imagine a humanoid design effectively finding its way through small holes in a wall or through corridors filled with rubble? This example illustrates the need for synthetic affective expressions that integrate seamlessly with the other, often more important, morphological design requirements of a technology. Little work, however, has been done to develop such an alternative approach. We present one here.

Recent research on visual emotion recognition offers substantial evidence that recognizing some emotions requires neither resemblance to, nor the configuration of, the human body or face per se (Aronoff, 2006; de Gelder et al., 1999; Lundqvist & Öhman, 2004). Instead, recognition of these emotion expressions can rely on the mere presence of basic motion and form features essential to emotion recognition, which are extracted at the highest levels of abstraction in perception (Aronoff, 2006; Lundqvist & Öhman, 2004; Pavlova et al., 2005). Additionally, a large body of experimental research exists on emotion attribution to simple abstract geometrical shapes, based on such essential affective features (Aronoff, 2006; Aronoff et al., 1992; Collier, 1996; Heider & Simmel, 1944; Larson et al., 2008; Locher & Nodine, 1989; Oatley & Yuill, 1985; Pavlova et al., 2005; Rimé et al., 1985; Scholl & Tremoulet, 2000; Visch & Goudbeek, 2009). These theoretical insights motivate us to investigate the possibilities of emotion expression independent of the configuration of the human body and face, based on the minimal essential components of visual emotion recognition. Developing a design strategy for affective robots based on these insights, however, requires a novel and fundamentally different theoretical framework. This article proposes such a framework.
