Simplifying the Multimodal Mobile User Experience

Keith Waters
DOI: 10.4018/978-1-60566-978-6.ch011

Abstract

Multimodality presents challenges within a mobile cellular network. Variable connectivity, coupled with a wide variety of handset capabilities, presents significant constraints that are difficult to overcome. As a result, commercial mobile multimodal implementations have yet to reach the consumer mass market and are considered niche services. This chapter describes multimodality with handsets in cellular mobile networks, coupled to new opportunities in targeted Web services. Such Web services aim to simplify and speed up interactions through new user experiences. This chapter highlights some key components with respect to a few existing approaches. While the most common forms of multimodality use voice and graphics, new modes of interaction are enabled via simple access to device properties, called the Delivery Context: Client Interfaces (DCCI).

Introduction

Many initial developments in multimodality focused on systems with an unconstrained set of resources, such as those found in desktops, kiosks and rooms with high-performance processors, large displays and high-fidelity audio capabilities. In contrast, mobile cellular handsets are highly constrained, characterized by small screens, restrictive keyboards and intermittent network connectivity. Despite these restrictions, it has been suggested that multimodality should in fact be more useful in a wireless mobile environment (Kernchen and Tafazolli 05). This chapter endorses this view and further suggests that a simplified approach to mobile multimodality can be achieved through the incorporation of mobile device modes.

The recent emergence of the mobile Web and of well-defined Web standards allows the development of rich Web applications and services. Such data services are likely to reshape how users interact with their mobile cellular phones in the next few years. For example, today’s commercially available smart phones, such as the iPhone and handsets running the Google Android platform, are fully capable of rendering standards-compliant Web content that goes well beyond the traditional limits of the Wireless Application Protocol (WAP) and the mobile-specific specifications of the Open Mobile Alliance (Alliance 07). Furthermore, standards-compliant mobile Web browsers will reshape how mobile Web applications can be presented on mid-tier mobile cellular handsets. As a result, there are emerging opportunities to integrate novel and simplified mobile multimodal modes within a Web-based interaction.

When accessing Web services, multimodality is well suited to impoverished mobile interactions, especially when the screen is small and the inputs and outputs are awkward and cumbersome. In such situations, interactions that follow a path of least resistance are appropriate. For example, speech recognition can simply replace keyboard input when the user requires hands-free operation. Likewise, while some text fields can be filled via speech input, selecting items from a list using a stylus is often quicker. Larger tasks can thus be completed faster with multimodal interactions than with single modes alone. In addressing common multimodal tasks such as form filling, one must be sensitive to users’ needs and leverage good design principles. Speech interactions, for example, tend to be cumbersome for spatial tasks such as identifying regions on maps (Oviatt 00). In addition, offering both modalities all the time demands careful interaction design, because users can be confused about which mode to use when. Nevertheless, it is possible to demonstrate mobile multimodal systems that complete tasks faster than a single mode alone.
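To make the form-filling example concrete, the TypeScript sketch below shows a text field that can be filled either by typing or by speech, with the keyboard as the fallback mode. It is illustrative only: the chapter does not prescribe an API, and the use of the Web Speech API, the fillField function, and the element IDs here are assumptions introduced for this sketch.

// Hedged sketch: a text field filled either by typing or by speech.
// SpeechRecognition (Web Speech API) and the element IDs are assumptions,
// not part of the chapter; the keyboard remains the fallback mode.

const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

function fillField(field: HTMLInputElement): void {
  if (!SpeechRecognitionImpl) {
    field.focus(); // No speech support: keyboard is the sole input mode.
    return;
  }
  const recognizer = new SpeechRecognitionImpl();
  recognizer.lang = "en-US";
  recognizer.onresult = (event: any) => {
    // Take the top hypothesis; the user can still correct it by typing.
    field.value = event.results[0][0].transcript;
  };
  recognizer.onerror = () => field.focus(); // Fall back to the keyboard.
  recognizer.start();
}

// Wire the speech mode to a dedicated button, leaving tap-and-type
// available at all times.
document.querySelector<HTMLButtonElement>("#speak")
  ?.addEventListener("click", () => {
    const city = document.querySelector<HTMLInputElement>("#city");
    if (city) fillField(city);
  });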

Mobility introduces additional dimensions to the multimodal user experience. Users who are walking, moving items on a loading dock, or driving an automobile are usually focused on those tasks. In such situations, attending to a multi-input interface is not feasible. The ability to switch between modes is therefore an important multimodal capability, especially when single-handed or hands-free operation is demanded.
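As a minimal sketch of such mode switching, the TypeScript fragment below routes input to speech only when the user’s hands are occupied and the device can recognize speech. The InteractionMode type and selectMode function are hypothetical names introduced for illustration, not part of the chapter.

// Hedged sketch of mode switching: prefer speech only when the hands
// are occupied and speech recognition is available; otherwise touch.
// UserContext and its fields are illustrative assumptions.

type InteractionMode = "touch" | "speech";

interface UserContext {
  handsFree: boolean;    // e.g. driving, or carrying items on a dock
  speechCapable: boolean; // the device can actually recognize speech
}

function selectMode(ctx: UserContext): InteractionMode {
  return ctx.handsFree && ctx.speechCapable ? "speech" : "touch";
}

console.log(selectMode({ handsFree: true, speechCapable: true }));  // "speech"
console.log(selectMode({ handsFree: false, speechCapable: true })); // "touch"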

It has been recognized that commercial mobile multimodality has reached a crossroads, and that the challenges of mobile multimodal services depend directly on the specific capabilities of mobile devices (Yamakami 07). This chapter similarly concludes that combining multiple human modes of interaction coherently into a single, well-understood and well-defined mobile standard is a challenge. Nevertheless, this chapter presents an alternative approach to multimodality that can simplify the user experience through the novel use of device-based modalities. This approach is both realistic and practical.

Mobile presence is one example of a mobile device modality. Undoubtedly, mobile presence coupled to location will spur new types of mobile location-based services, and it is clear that multimodal systems will be able to capitalize on these newly available modes. Importantly, on-board device sensors will be able to provide unambiguous environmental status as inputs to multimodal systems, answering questions such as: What is the status of my network? Event notifications can represent dynamically changing properties: What is my device’s current location? An application’s changing usage patterns can also be represented: Can my application automatically adapt from a quiet room to a noisy street? Exposing these new forms of a device’s system-level status at the level of Web markup facilitates the integration of novel device properties, which in turn can simplify the mobile multimodal user experience.
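The TypeScript sketch below illustrates how such device properties might be read and observed from script in a DCCI-like manner. DCCI models the delivery context as a DOM tree of properties, but the access point, property URIs, and event name used here (deviceContext, #networkBearer, #ambientNoise, dcci-property-change) are assumptions introduced for illustration, not definitions taken from the DCCI specification.

// Hedged sketch of reading DCCI-style device properties from script.
// The deviceContext root, property URIs and event name are assumptions;
// real DCCI implementations define their own access points.

interface DCCIProperty extends Node {
  readonly value: string;        // current property value, e.g. "GPRS"
  readonly propertyType: string; // URI identifying the property
}

declare const deviceContext: DCCIProperty; // root supplied by the platform

// Static query: what is the status of my network?
function networkStatus(root: DCCIProperty): string | undefined {
  let bearer: string | undefined;
  root.childNodes.forEach((node) => {
    const prop = node as unknown as DCCIProperty;
    if (prop.propertyType.endsWith("#networkBearer")) bearer = prop.value;
  });
  return bearer;
}

// Dynamic notification: react when a property changes, e.g. shift from
// audio prompts to on-screen feedback when ambient noise rises.
deviceContext.addEventListener("dcci-property-change", (evt) => {
  const prop = evt.target as DCCIProperty;
  if (prop.propertyType.endsWith("#ambientNoise") && Number(prop.value) > 70) {
    // Suppress speech output; rely on the visual mode instead.
  }
});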
