Introduction
Interaction with animals can be regarded as the gold standard of a rich, engaging, and gratifying experience where the user is fully immersed and focused (Beetz, Uvnäs-Moberg, Julius, & Kotrschal, 2012). It would seem that interacting with things that are alive has a quality distinct from an interaction with inanimate matter. In order to purposefully build systems that are seen as alive, we need to understand what inferences humans are making. The distinction between what is alive and what is not is a fundamental perceptual category in humans (Wiggett, Pritchard, & Downing, 2009) that can be regarded as one of the “evolutionarily adapted domain-specific knowledge systems” (Caramazza & Shelton, 1998). The fundamental nature of this faculty is highlighted by the fact that even young infants seem capable of distinguishing the animate from the inanimate (Poulin-Dubois, Lepage, & Ferland, 1996; Schlottmann & Ray, 2010). Entities that are alive, entities that look alive, and entities that display agency belong to three distinct but intersecting sets (Figure 1a). The relevant question in our context is what subjective heuristics people use when making inferences based on observation of, and interaction with, an entity. It is known that humans use a number of cues, such as the presence of face-like features and movement, to determine what is animate (Jipson & Gelman, 2007). We hypothesize that the main factors for the attribution of animacy are the appearance and the (assumed) agency of the entity. Agency, in turn, is inferred from the observed behavior (Figure 1b).
Figure 1. (a) Entities that are alive, entities that look alive, and entities that display agency belong to three distinct but intersecting sets. (b) Proposed heuristic used for determining if an entity is alive.
In other words, we assume that the factors driving the attribution of animacy can be divided into static (appearance) and dynamic (behavior). The notion that behavior is a factor distinct from appearance comes, for example, from studies of the perception of animacy in abstract shapes moving in biologically inspired ways (Scholl & Tremoulet, 2000).
Though the mechanism of animacy attribution is unlikely to be trivial, we assume that the factors “agency” and “appearance” will, by and large, contribute in an additive fashion to an attribution of animacy. Interesting scenarios arise when there is a disparity between the two factors: we assume that a low level of agency combined with an appearance that strongly suggests animacy leads to the “uncanny valley” effect (Mori, 1970). In the present study, we investigate the inverse case: the combination of high agency with an appearance that is not lifelike. Specifically, we investigate which characteristics of interactive behavior lead to an attribution of agency, and how these relate to specific kinds of user experience.
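The hypothesized additive relationship, and the two disparity cases, can be sketched as a toy scoring model. This is purely illustrative: the equal weights and the mismatch penalty standing in for the uncanny-valley effect are hypothetical placeholders, not measured quantities from the study.

```python
# Toy illustration of the hypothesized additive model of animacy
# attribution. All weights and the uncanny-valley penalty below are
# hypothetical placeholders, not empirical values.

def animacy_score(agency: float, appearance: float) -> float:
    """Combine agency and appearance (each in [0, 1]) additively,
    with a penalty when lifelike appearance outstrips agency
    (the hypothesized 'uncanny valley' case)."""
    base = 0.5 * agency + 0.5 * appearance
    # Penalize only the mismatch where appearance exceeds agency.
    mismatch = max(0.0, appearance - agency)
    return base - 0.4 * mismatch

# High agency, non-lifelike appearance (the case studied here):
print(round(animacy_score(agency=0.9, appearance=0.2), 2))  # 0.55
# Low agency, highly lifelike appearance ("uncanny valley"):
print(round(animacy_score(agency=0.2, appearance=0.9), 2))  # 0.27
```

Under this sketch, the same additive base score is discounted only when appearance promises more life than the behavior delivers, which is one simple way to express the asymmetry assumed above.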
Most studies investigating factors of agency use a passive paradigm in which participants observe pre-recorded stimuli (e.g. Schlottmann & Ray, 2010). In our study, we investigate the attribution of agency through real-time interaction with an artifact. To bypass the influence of appearance factors, we exploit an artifact that is explicitly non-anthropomorphic: an interactive mixed-reality space. The viability of this approach is grounded in earlier work on a similar space, where we showed that humans do attribute the property of being an entity to the interactive space “Ada” (Eng, Douglas, & Verschure, 2005; Eng, Mintz, & Verschure, 2005). In the present study, we use a system that is a further step beyond Ada, the eXperience Induction Machine (Bernardet et al., 2011).