Benefits, Challenges, and Research in Multimodal Mobile GIS

Julie Doyle (University College Dublin, Ireland), Michela Bertolotto (University College Dublin, Ireland) and David Wilson (University of North Carolina at Charlotte, USA)
Copyright: © 2009 |Pages: 20
DOI: 10.4018/978-1-60566-386-9.ch018


The user interface is of critical importance in applications that provide mapping services. It defines the visualisation and interaction modes for carrying out a variety of mapping tasks, and ease of use is essential to successful user adoption. This is even more evident in a mobile context, where device limitations can hinder usability. In particular, interaction modes such as a pen/stylus are limited and can be quite difficult to use while mobile. Moreover, the majority of GIS interfaces are inherently complex and require significant user training, which can be a serious problem for novice users such as tourists. In this chapter, we review issues in the development of multimodal interfaces for mobile GIS, which allow for two or more modes of input, as an attempt to address interaction complexity in the context of mobile mapping applications. In particular, we review both the benefits and the challenges of integrating multimodality into a GIS interface. We describe our multimodal mobile GIS, CoMPASS, which helps to address this problem by permitting users to interact with spatial data through a combination of speech and gesture input, effectively providing more intuitive and efficient interaction for mobile mapping applications.
Chapter Preview


Verbal communication between humans is often supplemented with additional sensory input, such as gestures, gaze and facial expressions, to convey meaning and emotion. Multimodal systems, which process two or more naturally co-occurring modalities, aim to emulate such communication between humans and computers. The rationale for multimodal HCI is that such interaction can provide increased naturalness, intuitiveness, flexibility and efficiency for users, in addition to being easy to learn and use. As such, there has been a growing emphasis in recent years on designing multimodal interfaces for a broad range of application domains.

Significant advances have been made in developing multimodal interfaces since Bolt’s original ‘Put-that-there’ demonstration (Bolt, 1980), which allowed for object manipulation through a combination of speech and manual pointing input. This has been due, in large part, to the multitude of technologies available for processing various input modes and to advances in device technology and recognition software. A varied set of multimodal applications now exists that can recognise and process various combinations of input modalities, such as speech and pen (Doyle et al, 2007), speech and lip movements (Benoit et al, 2000), tilting (Cho et al, 2007), and vision-based modalities including gaze (Qvarfordt & Zhai, 2005), head and body movement (Nickel & Stiefelhagen, 2003), and facial features (Constantini et al, 2005).

In addition to intuitive input modalities, the large range of relatively inexpensive mobile devices currently available ensures that applications supporting multimodality reach a broad and diverse range of users in society. As such, multimodal interfaces are now incorporated into various application contexts, including healthcare (Keskin et al, 2007), applications for vision-impaired users (Jacobson, 2002), independent living for the elderly (Sainz Salces et al, 2006), and mobile GIS, to name but a few. This latter application area represents the focus of our research and the subject of this chapter. Multimodal interfaces can greatly assist users in interacting with complex spatial displays in mobile contexts. Not only do such interfaces address the limited interaction techniques associated with mobile usage, but they also provide the user with flexibility, efficiency and, most importantly, an intuitive, user-friendly means of interacting with a GIS. This is particularly beneficial to non-expert GIS users, for whom traditional GIS interfaces may be too difficult to operate.

This chapter presents an account of the most significant issues relating to multimodal interaction on mobile devices that provide geospatial services. In particular, we focus on speech and pen input, where speech may take the form of voice commands or dictation, while pen input includes gestures, handwriting or regular stylus interaction to communicate intention. The contribution of this chapter is two-fold. First, we discuss the benefits of multimodal HCI for mobile geospatial users, in addition to providing an account of the challenges and issues involved in designing such interfaces for mobile GIS. Second, we provide a review of the current state of the art in multimodal interface design for mobile GIS. This includes a discussion of CoMPASS (Combining Mobile Personalised Applications with Spatial Services), the mobile mapping system that we have developed for use on a Tablet PC. We also present an account of comparable systems in the literature and discuss how these contrast with CoMPASS.
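The speech-plus-pen combination described above is commonly realised as a late-fusion step: a recognised voice command and a pen gesture that co-occur within a short time window are merged into a single map action, in the spirit of ‘Put-that-there’. The following is a minimal sketch of that idea only; the event types, field names and the two-second window are illustrative assumptions and do not describe the actual CoMPASS implementation.

```python
from dataclasses import dataclass

# Hypothetical event types for a "put-that-there"-style fusion step.
@dataclass
class SpeechEvent:
    command: str      # e.g. "zoom", "select", "annotate"
    timestamp: float  # seconds

@dataclass
class PenEvent:
    x: float          # map coordinates touched by the stylus
    y: float
    timestamp: float

# Assumed fusion window: inputs this close in time count as one act.
FUSION_WINDOW = 2.0  # seconds

def fuse(speech: SpeechEvent, pen: PenEvent):
    """Late fusion: pair a voice command with a co-occurring pen gesture."""
    if abs(speech.timestamp - pen.timestamp) <= FUSION_WINDOW:
        return {"action": speech.command, "target": (pen.x, pen.y)}
    return None  # too far apart in time; treat the inputs separately

# Saying "zoom" while tapping the map at (53.3, -6.2):
print(fuse(SpeechEvent("zoom", 10.1), PenEvent(53.3, -6.2, 10.6)))
```

The design choice sketched here, fusing at the semantic level after each recogniser has run, is what lets each modality compensate for the other: the speech channel supplies the intent, while the pen supplies the spatial referent that would be awkward to speak aloud.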

The motivation behind our research is to overcome some of the challenges of mobile systems and the complexity of GIS interfaces. Supporting multiple input modalities addresses the issue of limited interaction capabilities and allows users to choose the mode of interaction that is most intuitive to them, hence increasing the user-friendliness of a mobile geospatial application.
