Introduction
Over the last decade, human–machine interaction research has increasingly explored the role of affect within and among humans in order to develop technologies that can function appropriately and intelligently in personal and social environments. Such work is fundamental to a variety of techno-scientific research areas, such as affective computing (Picard, 1997) and social robotics (Fong et al., 2003). These technologies can serve as real-world research platforms for theoretical affective and social science (Cañamero, 2005), but they thrive mostly on the promise of practical application in, for example, healthcare (Broekens et al., 2009), therapy (Dautenhahn et al., 2002), and education (Saerbeck et al., 2010).
At the core of these technologies lies the challenge of designing an appearance that is intuitive to people in terms of social and affective interaction, yet simultaneously satisfies technological and functional requirements. Current design strategies typically attempt to mimic the human or animal form, either realistically or iconically, often bolstered by design principles from character animation (Bartneck & Forlizzi, 2004; Blow et al., 2006; Fong et al., 2003; Hegel et al., 2009). There are, however, situations in which anthropomorphic or zoomorphic mimicry constrains the optimal design of affective technologies. For instance, affective communication benefits the design of a rescue robot by providing an intuitive warning signal to people; the configuration of the human body, however, may not be optimal for a rescue robot, which needs to operate in circumstances where humans cannot. Indeed, can you imagine a humanoid design effectively finding its way through small holes in a wall or through corridors filled with rubble? This is just one example illustrating the need for synthetic affective expressions that integrate seamlessly with other, often more important, morphological design requirements of a technology. Little work, however, has been done to develop such an alternative approach. We present one here.
Recent research on visual emotion recognition offers substantial evidence that recognizing some emotions requires neither resemblance to, nor the configuration of, the human body or face per se (Aronoff, 2006; de Gelder et al., 1999; Lundqvist & Öhman, 2004). Instead, recognition of these emotion expressions can rely solely on the presence of basic motion and form features that are essential to emotion recognition and are extracted at the highest levels of abstraction in perception (Aronoff, 2006; Lundqvist & Öhman, 2004; Pavlova et al., 2005). Additionally, a large body of experimental research exists on the attribution of emotion to simple abstract geometric shapes on the basis of such essential affective features (Aronoff, 2006; Aronoff et al., 1992; Collier, 1996; Heider & Simmel, 1944; Larson et al., 2008; Locher & Nodine, 1989; Oatley & Yuill, 1985; Pavlova et al., 2005; Rimé et al., 1985; Scholl & Tremoulet, 2000; Visch & Goudbeek, 2009). These theoretical insights motivate us to investigate the possibilities of emotion expression independent of the configuration of the human body and face, based on the minimal essential components of visual emotion recognition. Developing a design strategy for affective robots from these insights, however, requires a novel and fundamentally different theoretical framework. This article proposes such a framework.