Visualising Interactions on Mobile Multimodal Systems

Kristine Deray (University of Technology, Sydney, Australia) and Simeon Simoff (University of Western Sydney, Australia)
Copyright: © 2009 | Pages: 13
DOI: 10.4018/978-1-60566-386-9.ch023

Abstract

The purpose of this chapter is to set out design guidelines for visual representations of interactions on mobile multimodal systems. The chapter examines the features of interaction as a process and how these features are exposed in the data. It presents a three-layer framework for designing visual representations for mobile multimodal systems and a method that implements it. The method is based on an operationalisation of the source-target mapping from the contemporary theory of metaphor. The resultant design guidelines are grouped into (i) a set of high-level design requirements for visual representations of interactions on mobile multimodal systems; and (ii) a set of specific design requirements for the visual elements and displays representing interactions on mobile multimodal systems. The second set is then considered subject to an additional requirement: the preservation of the beauty of the representation across the relevant modalities. The chapter focuses on the output modality. Although the chapter considers data from human-to-human interactions, the presented framework and design guidelines are applicable to interaction in general.

Introduction

Contemporary information and communications technology (ICT) offers an unprecedented degree of mobility, affecting the operating environments of human endeavours. These environments may span multiple and changing contexts, be they offices, homes, outdoor settings, hospitals or any other environments where ICT is embedded in the processes that operate within them. The concept of mobility of a device spans a range of platforms with different sizes, capabilities and interfaces. At one end are familiar compact devices, such as mobile phones, personal digital assistants (PDAs) and integrated ICT devices such as the iPhone; at the other end are emerging technologies of larger size, but still mobile and embedded in the environment, for instance, digital tables like the Microsoft Surface platform1, which can be moved around in the physical environment in the same way as we arrange conventional furniture. Such systems, due to their spatial presence and relatively large interaction area, enable rich multimodal interaction (Tse et al., 2008) and hence require the development and support of efficient multimodal interfaces (Oviatt, 2003). Such interfaces require the development of new interaction paradigms in order to facilitate the design. Successful interface paradigms are recognised by their consistency and respective intuitive behaviour – a result of well-understood underlying metaphors and corresponding design patterns that can be articulated independently of any single application (Raman, 2003).

Many application areas that utilise mobile systems require communication of information about the interaction process. The way interactions unfold can tell us a lot about both the process and the outcomes of the interactions. For example, in health care, the treatment process is expected to benefit if we are able to present information about how patient-practitioner interactions unfold and if, based on that representation, one can judge whether communication has been good or not and, if not, where the bottlenecks were (Deray and Simoff, 2007). In design, the participatory design process may benefit if we can present information about how client-designer interactions unfold and if, based on that representation, one can judge whether design communication has been good or not, and align interaction patterns with the emergence of design solutions in order to revisit intermediate design solutions (Simoff and Maher, 2000). In international negotiations, the negotiation process may benefit if we can present information about how the interaction between negotiating parties unfolds, in order to monitor the flow of these interactions and provide indicators that allow a trusted third-party mediator to intervene before negotiations reach a deadlock (Simoff et al., 2008).

These examples emphasise the interplay between information about interaction and informed decision making in the respective application domains. Such interplay sets the key goals in the pursuit of efficient and effective means for information communication in mobile multimodal systems, in particular: (i) enabling the natural fusion of information into human communication and human activities; (ii) increasing the reliability of interactions within information-rich environments; (iii) delivering information at the right level of granularity, depending on the context of the decision-making process; and (iv) enabling visual analytics based on information communicated through the different modality channels. In order to address these goals, it is important to develop methods of encoding information about the way interactions unfold that both humans and mobile systems are able to process and utilise in order to improve interactions and the respective processes.

Further in this chapter we consider the specifics of interactions from the point of view of their visual representation in multimodal systems. The term “multimodal” has been used broadly in a number of disciplines. For the purpose of this chapter we adapt the definition of a multimodal HCI system in (Jaimes and Sebe, 2007) to mobile systems: a multimodal mobile system is a mobile system that can respond to human inputs in more than one modality and can communicate its output to humans in more than one modality.
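The adapted definition can be made concrete with a minimal sketch. The class and names below (MultimodalMobileSystem, register_input, register_output, interact) are hypothetical illustrations, not part of the chapter's framework: a system qualifies as multimodal when it registers more than one input modality and fans its output out to more than one output modality.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class MultimodalMobileSystem:
    """Illustrative sketch of the definition: responds to human input in
    more than one modality and communicates output in more than one."""
    input_handlers: Dict[str, Callable[[str], str]] = field(default_factory=dict)
    output_channels: Dict[str, List[str]] = field(default_factory=dict)

    def register_input(self, modality: str, handler: Callable[[str], str]) -> None:
        self.input_handlers[modality] = handler

    def register_output(self, modality: str) -> None:
        self.output_channels.setdefault(modality, [])

    def interact(self, modality: str, payload: str) -> str:
        # Interpret the input within its own modality...
        message = self.input_handlers[modality](payload)
        # ...then communicate the result over every output modality.
        for channel, log in self.output_channels.items():
            log.append(f"[{channel}] {message}")
        return message

# Two input modalities (touch, speech) and two output modalities
# (visual, audio) satisfy the "more than one" condition on both sides.
system = MultimodalMobileSystem()
system.register_input("touch", lambda p: f"tap at {p}")
system.register_input("speech", lambda p: f"heard '{p}'")
system.register_output("visual")
system.register_output("audio")
system.interact("touch", "(120, 45)")
```

The dictionary-per-modality layout is only one possible decomposition; its point is that input interpretation and output rendering are registered independently per modality, which is the property the definition singles out.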
