Bodily Engagement in Multimodal Interaction: A Basis for a New Design Paradigm?


Kai Tuuri, Antti Pirhonen, Pasi Välkkynen
DOI: 10.4018/978-1-60566-978-6.ch006

Abstract

The creative processes of interaction design operate in the terms we generally use for conceptualising human-computer interaction (HCI). The prevailing design paradigm therefore provides a framework that essentially affects and guides the design process. We argue that the current mainstream design paradigm for multimodal user interfaces treats human sensory-motor modalities and the related user-interface technologies as separate channels of communication between the user and an application. Within such a conceptualisation, multimodality implies the use of different technical devices in interaction design. This chapter outlines an alternative design paradigm, based on an action-oriented perspective on human perception and the process of meaning creation. The proposed perspective stresses the integrated sensory-motor experience and the active embodied involvement of the subject in perception as a natural part of interaction. The outlined paradigm provides a new conceptual framework for the design of multimodal user interfaces. A key motivation for this framework is the acknowledgement of multimodality as an inevitable quality of interaction and interaction design, a quality whose existence does not depend on, for example, the number of presentation modes implemented in an HCI application. The need for such an interaction- and experience-derived perspective is amplified by the trend of computing moving into smaller devices of various forms that are embedded into our everyday life. As a brief illustration of the proposed framework in practice, a case study of sonic interaction design is presented.

Introduction

In the early days of human-computer interaction (HCI), the design paradigm was mainly seen as a means to “synchronise” the human being and the computer (Card et al., 1983). As the number of computer users rose rapidly and computers were suddenly in the hands of “the man in the street”, there was an evident need to make computers easier to use than they had been when operated only by computing experts. Psychologists were challenged to model the human mind and behaviour for the needs of user-interface design. It was thought that if we knew how the human mind works, user interfaces (UIs) could be designed to be compatible with it. To understand the human mind, the computer metaphor was used. Correspondingly, multimodality has often meant that in interaction with a computer, several senses (“input devices”) and several motor systems (“output devices”) are utilised. This kind of cognitivist conceptualisation of the human being as a “smart device” with separate systems for input, central (symbolic) processing and motor activity has indeed been appealing from the perspective of HCI practices. However, contemporary trends in cognitive science have drifted away from such a computer-based input-output model towards the idea of the mind as an emergent system that is structurally coupled with the environment as a result of the history of the system itself (Varela et al., 1991). We thus argue that, as a conceptual framework for HCI, the traditional cognitivist approach is limited, as it conflicts with the contemporary view of the human mind and with common-sense knowledge of the way we interact with our everyday environment (see Varela et al., 1991; Noë, 2004; Lakoff & Johnson, 1999; Clark, 1997; Searle, 2004). One of the shortcomings of the traditional input-output scheme is that it implies that the capacity for perception could be dissociated from the capacities of thought and action (Noë, 2004).

For a long time, the development of UIs was strongly focused on textual and graphical forms of presentation and interaction in terms of the traditional desktop setting. However, as computing becomes increasingly embedded into various everyday devices and activities, a clear need has been recognised to understand the interaction between a user and a technical device when no keyboard, large display or mouse is available. The need to widen the scope of human-computer interaction design to exploit multiple modalities of interaction is therefore generally acknowledged.

Within the mainstream paradigm of HCI design, conceptions of multimodality tend to make clear distinctions between interaction modalities (see, e.g., Bernsen, 1995). There is the fundamental division between perceiving (receiving feedback presented by the system) and acting/doing (providing input to the system). These, in turn, have been split into several modality categories. Of course, traditional distinctions between modalities have proved their usefulness as conceptual tools and have thus served many practical needs, as they make the analysis and development of HCI applications straightforward. However, an overly analytic and divisive emphasis on interaction modalities may promote (or reflect) design practices in which interaction between a user and an application is conceptualised in terms of the technical instrumentation representing different input and output modalities. Such an approach also potentially encourages conceptualising modalities as channels of information transmission (Shannon & Weaver, 1949). Channel-orientation is in turn related to the ideal that information in interaction could be handled independently of its form and could thus be interchangeably allocated and coded into any technically available “channels”. We see that, in its application to practical design, the traditional paradigm for multimodality may hinder the design potential of truly multimodal interaction. We argue that in the design of HCI it is not necessarily appropriate to handle interaction modalities in isolation from each other. For instance, the use of haptics and audio in interaction, though referring to different perceptual systems, benefits from these modalities being considered together (Cañadas-Quesada & Reyes-Lecuona, 2006; Bresciani et al., 2005; Lederman et al., 2002). However, even recent HCI studies of cross-modal interaction, although concerned with the integration of modalities, still seem to retain the information-centric ideal of interchangeable channels (see, e.g., Hoggan & Brewster, 2007).
