Multimodal Cues: Exploring Pause Intervals between Haptic/Audio Cues and Subsequent Speech Information

Aidan Kehoe, Flaithri Neff, Ian Pitt
DOI: 10.4018/978-1-60566-978-6.ch012

Abstract

There are numerous challenges to accessing user assistance information in mobile and ubiquitous computing scenarios. For example, there may be little or no display real estate on which to present information visually; the user’s eyes may be busy with another task (e.g., driving); and it can be difficult to read text while moving. Speech, together with non-speech sounds and haptic feedback, can be used to make assistance information available to users in these situations. Non-speech sounds and haptic feedback can cue information that is about to be presented via speech, ensuring that the listener is prepared and that leading words are not missed. In this chapter, we report on two studies that examine user perception of the duration of the pause between a cue (a non-speech sound, a haptic effect, or a combined non-speech sound plus haptic effect) and the subsequent delivery of assistance information using speech. Based on these studies, we recommend cue pause intervals in the range of 600 ms to 800 ms.
Chapter Preview

Introduction

The proliferation of mobile computing devices is moving us towards a ubiquitous computing scenario in which people and environments are augmented with computational resources (Abowd et al., 2002). To accomplish tasks, users operate a variety of increasingly powerful and sophisticated network-enabled devices such as smartphones, PDAs (Personal Digital Assistants), and hybrid devices. Despite product designers’ best efforts to make products usable, there are still situations in which users need assistance to operate a product, access a service, or accomplish a task. In such situations, ubiquitous online assistance should be available to support users in completing their goals.

In the literature, this type of support is typically referred to as “online help” or “user assistance”. Traditionally, the term “online help” refers to the documentation available to support users of software applications, e.g., “brief, task-oriented modules of information” (Harris & Hosier, 1991). This type of material, and much more, is needed to support use of the broad range of interactive products and services available to users today.

In recent years, the term “online help” has gradually been replaced in the technical literature by the broader term “user assistance”. One definition of user assistance is “the information channels that help users evaluate, learn, and use software tools” (BCS, 2001). This broader definition includes “other forms of online documentation, such as quick tours, online manuals, tutorials, and other collections of information that help people use and understand products” (Gelernter, 1998).

To date, much of the research relating to user assistance has been focused on the users of software packages in a desktop/laptop usage scenario, i.e., the user has a mouse, keyboard and large monitor. Studies in these usage scenarios have shown that mainstream user assistance approaches can be effective (Grayling, 2002; Hackos & Stevens, 1997; Horton, 1994), but there are also many documented difficulties and limitations associated with these approaches (Carroll & Rosson, 1987; Delisle & Moulin, 2002; Rosenbaum & Kantner, 2005).

User assistance systems that evolved to support desktop/laptop software applications have been adapted and are now used on mobile handheld devices, even though these devices have significantly different capabilities, form factors, and usage scenarios. As a result, many of the problems associated with desktop/laptop user assistance also exist on smart mobile devices, and in some cases the usability issues on these platforms are even more severe.

To date, most user assistance material has been developed under the assumption that it will be read, either on a visual display or in print. However, displaying assistance material on portable devices with small screens, limited resolution, and restricted font support is difficult. There is typically very little space available to display assistance information in the context of the application user interface. Small amounts of pop-up text can be displayed, but this risks obscuring important information in the application user interface itself. On many handheld platforms, the user must switch to a separate “help viewer” program to view assistance material, i.e., they move away from the associated application window. Such a context switch has been shown to be problematic in desktop applications (Kearsley, 1988; Hackos & Stevens, 1997), and similar problems can be expected in mobile usage scenarios. Reading on small form factor devices also presents numerous challenges (Marshall, 2002).

Speech technology can be used to enable access to user assistance material in a variety of scenarios that are problematic for traditional access methods, e.g., when the user’s hands or eyes are busy, or where there is little or no visual display.
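
As a rough illustration of the cue-pause-speech pattern described in the abstract, the Python sketch below plays a cue, waits an interval within the recommended 600 ms to 800 ms range, and then speaks the assistance text. It assumes the off-the-shelf pyttsx3 text-to-speech library; the play_cue function and the example message are hypothetical placeholders and do not come from the chapter itself.

```python
import time

import pyttsx3  # off-the-shelf text-to-speech library (assumed available)

# Pause between cue and speech; 0.7 s falls within the 600-800 ms
# range recommended by the studies reported in this chapter.
CUE_PAUSE_S = 0.7


def play_cue() -> None:
    """Play a non-speech audio and/or haptic cue.

    Hypothetical placeholder: on a real handheld device this would call
    a platform-specific tone generator or vibration API.
    """
    print("\a", end="", flush=True)  # terminal bell as a stand-in cue


def present_assistance(message: str) -> None:
    """Cue the listener, pause, then deliver assistance text as speech."""
    play_cue()
    # Give the listener time to attend, so the leading words of the
    # spoken message are not missed.
    time.sleep(CUE_PAUSE_S)
    engine = pyttsx3.init()
    engine.say(message)
    engine.runAndWait()


if __name__ == "__main__":
    present_assistance("To reset the device, hold the power button for three seconds.")
```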
