Multimodal and Multichannel Issues in Pervasive and Ubiquitous Computing

José Rouillard
DOI: 10.4018/978-1-60566-978-6.ch001

Abstract

Multimodality in mobile computing has become a very active field of research in the past few years. Soon, mobile devices will allow smooth and smart interaction with objects of everyday life, thanks to natural and multimodal interactions. In this context, this chapter introduces some concepts needed to address the topic of pervasive and ubiquitous computing. The multimodal, multichannel and multidevice notions are presented and referred to by the partial acronym "multi-DMC". A multi-DMC referential is explained in order to understand what kinds of notions have to be sustained in such systems. Three case studies then illustrate the issues faced when proposing systems able to support, at the same time, different modalities such as voice or gesture, different devices, such as PC or smartphone, and different channels, such as web or telephone.

Introduction

For the general public, the year 1984 marks the emergence of WIMP (Windows, Icons, Menus, Pointing device) interfaces. Developed at Xerox PARC in 1973 and popularized by the Macintosh, this type of graphical user interface is still widely used today on most computers. However, in recent years, much scientific research has focused on post-WIMP interfaces. The goal is no longer to offer a single way of interacting with a computer system, but to consider the different solutions for making user interfaces as natural as possible.

With the introduction of many types of mobile devices, such as cellphones, Personal Digital Assistants (PDAs) and Pocket PCs, and the rise of their capabilities (Wi-Fi, GPS, RFID, NFC...), designing and deploying mobile interactive software that optimizes human-computer interaction has become a fundamental challenge. Modern terminals are natively equipped with many of the input and output resources needed for multimodal interaction, such as cameras, vibration, accelerometers and styluses. However, the main difference between multimedia and multimodal interaction lies in semantic interpretation and time management.
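To make the time-management point concrete, the hedged sketch below (plain Java, with hypothetical class, field and threshold names that are not taken from the chapter) shows one common way a multimodal engine decides whether two already-interpreted input events belong to the same command: their meanings are merged only if their timestamps fall within a short fusion window.

```java
// Illustrative sketch (hypothetical names): two input events are fused into one
// multimodal command only if they occur within a short time window.
import java.util.Optional;

final class InputEvent {
    final String modality;   // e.g. "voice" or "gesture"
    final String meaning;    // already-interpreted semantic payload
    final long timestampMs;  // when the event was produced

    InputEvent(String modality, String meaning, long timestampMs) {
        this.modality = modality;
        this.meaning = meaning;
        this.timestampMs = timestampMs;
    }
}

final class TemporalFusion {
    private static final long FUSION_WINDOW_MS = 1500; // illustrative threshold

    /** Merge two events into one command if they are close enough in time. */
    static Optional<String> fuse(InputEvent a, InputEvent b) {
        if (Math.abs(a.timestampMs - b.timestampMs) > FUSION_WINDOW_MS) {
            return Optional.empty(); // treated as two unrelated interactions
        }
        return Optional.of(a.meaning + " + " + b.meaning);
    }

    public static void main(String[] args) {
        InputEvent voice = new InputEvent("voice", "put that there", 10_000);
        InputEvent point = new InputEvent("gesture", "location=(120,45)", 10_600);
        System.out.println(fuse(voice, point)); // within the window: one fused command
    }
}
```

A multimedia player, by contrast, would simply present both streams side by side; it is the combination of semantic merging and temporal alignment sketched here that makes the interaction multimodal.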

Multimodality in mobile computing appears to be an important trend, but very few applications allow real synergistic multimodality. Yet, ever since Bolt's famous "put that there" paradigm (Bolt 1980), researchers have been studying models, frameworks, infrastructures and multimodal architectures allowing relevant use of multimodality, especially in mobile situations. Multimodality tries to combine interaction means to enhance the user interface's ability to adapt to its context of use, without requiring costly redesign and reimplementation. Blending multiple access channels provides users with new possibilities of interaction. A multimodal interface promises to let users choose the way they would naturally interact with it: they can switch between interaction means or use several available modes of interaction in parallel.
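As a rough illustration of such modality switching (again in Java, with hypothetical interface and class names that are assumptions rather than the chapter's own design), the sketch below lets a voice recognizer and a touch screen produce the same abstract command, so the dialogue logic does not depend on which interaction means the user picked.

```java
// Hedged sketch: two interaction means producing the same abstract command,
// so the user can freely switch between them (or use both in parallel).
interface CommandSource {
    String nextCommand(); // returns a modality-independent command token
}

class VoiceRecognizer implements CommandSource {
    public String nextCommand() { return "OPEN_CALENDAR"; } // stub for a speech result
}

class TouchScreen implements CommandSource {
    public String nextCommand() { return "OPEN_CALENDAR"; } // stub for a menu tap
}

class DialogueManager {
    void handle(CommandSource source) {
        String command = source.nextCommand();
        // The application logic is identical whatever modality produced the command.
        System.out.println("Executing: " + command);
    }

    public static void main(String[] args) {
        DialogueManager dm = new DialogueManager();
        dm.handle(new VoiceRecognizer()); // the user speaks
        dm.handle(new TouchScreen());     // the user taps; same behaviour
    }
}
```

The design choice here is to keep the dialogue manager modality-agnostic, which is what allows new interaction means to be added without the costly redesign mentioned above.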

Another field of research in which multimodality plays an important role is Computer Supported Cooperative Work (CSCW). CSCW is commonly seen as the study of how groups of people can work together using technology in a shared relationship of time, space, hardware and software. "In the context of ubiquitous and mobile computing, this situation of independent and collocated users performing unrelated tasks is however very likely to occur." (Kray et al. 2004). Even if the categories risk overlapping, design issues are often classified into management, technical and social issues.

  • Management issues mainly deal with the registration and later identification of users and devices as they enter and leave the workspace environment.

  • Technical issues occur with the control of specific device features and with the technical management of services offering the possibility to introduce (discover) or remove specific components from an interaction. The design problems are then related to fusion (i.e. combining multiple input types) and fission (i.e. distributing output across multiple output types) mechanisms, synchronization, and rules management between heterogeneous devices; a minimal fission sketch follows this list.

  • Social issues are more related to social rules and privacy matters. As we know, some devices, such as microphones, speakers and public displays, are inherently unsuitable for supporting privacy.
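To ground the fission and privacy points above, here is a hedged Java sketch (all class and method names are illustrative, not taken from the chapter) that distributes an abstract output message over whatever output components are currently registered, while keeping private content away from outputs that cannot support privacy, such as speakers or public displays.

```java
// Illustrative fission sketch: one abstract output is rendered on every
// registered output component that is allowed to present it.
import java.util.ArrayList;
import java.util.List;

interface OutputComponent {
    boolean supportsPrivacy();      // e.g. false for speakers and public displays
    void render(String message);
}

class ScreenOutput implements OutputComponent {
    public boolean supportsPrivacy() { return true; }
    public void render(String message) { System.out.println("[screen] " + message); }
}

class SpeechOutput implements OutputComponent {
    public boolean supportsPrivacy() { return false; } // audible to bystanders
    public void render(String message) { System.out.println("[speech] " + message); }
}

class FissionManager {
    private final List<OutputComponent> outputs = new ArrayList<>();

    void register(OutputComponent c)   { outputs.add(c); }    // device enters the workspace
    void unregister(OutputComponent c) { outputs.remove(c); } // device leaves it

    void present(String message, boolean isPrivate) {
        for (OutputComponent c : outputs) {
            if (isPrivate && !c.supportsPrivacy()) continue; // social/privacy rule
            c.render(message);
        }
    }

    public static void main(String[] args) {
        FissionManager fm = new FissionManager();
        fm.register(new ScreenOutput());
        fm.register(new SpeechOutput());
        fm.present("Meeting moved to 3 pm", false);    // rendered on screen and by speech
        fm.present("Your PIN code has changed", true); // rendered on screen only
    }
}
```

The register/unregister methods also hint at the management issue above: components must be discoverable and removable as devices join and leave the interaction.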
