Model-Based Target Sonification in Small Screen Devices: Perception and Action

Parisa Eslambolchilar, Andrew Crossan, Roderick Murray-Smith, Sara Dalzel-Job, Frank Pollick
DOI: 10.4018/978-1-59904-871-0.ch029

Abstract

In this work, we investigate the use of audio and haptic feedback to augment the display of a mobile device controlled by tilt input. We address the following questions: How do people begin searching in unfamiliar spaces? What patterns do users follow, and which techniques do they employ to accomplish the experimental task? What effect does a prediction of the future state in the audio space, based on a model of the human operator, have on subjects’ behaviour? In a pilot study, we observed subjects navigating a state space containing seven randomly placed audio sources, displayed via the audio and vibrotactile modalities. In the main study, we compared only the efficiency of different forms of audio feedback. We ran these experiments on a Pocket PC instrumented with an accelerometer and a headset, and measured the accuracy of selection, the exploration density, and the orientation of each target. The results quantify the changes brought by predictive, or “quickened”, sonified displays in mobile, gestural interaction, and highlight subjects’ search patterns and the effects of the independent variables, individually and in combination, on those patterns.
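
As a concrete illustration of this setup, the sketch below shows one iteration of a tilt-controlled, sonified navigation loop: device tilt is mapped to cursor velocity, and distance to the target is mapped to pitch. This is a minimal Python sketch, not the authors’ implementation; the gain, dead-zone, timestep, and pitch-range values are assumptions chosen for readability.

def tilt_to_velocity(tilt_deg, gain=0.02, dead_zone_deg=2.0):
    """Map device tilt (degrees) to cursor velocity; the dead zone
    keeps the cursor at rest when the device is held roughly level."""
    if abs(tilt_deg) < dead_zone_deg:
        return 0.0
    return gain * tilt_deg

def distance_to_pitch(distance, f_near=880.0, f_far=220.0, d_max=1.0):
    """Sonify the distance to an audio target: closer targets
    sound higher in pitch."""
    d = min(max(distance, 0.0), d_max) / d_max
    return f_far + (f_near - f_far) * (1.0 - d)

# One iteration of the loop: read the tilt, integrate the cursor
# position, and update the feedback pitch.
dt, x, target = 0.05, 0.40, 0.70   # timestep (s), cursor, target
tilt = 10.0                        # degrees, from the accelerometer
x += tilt_to_velocity(tilt) * dt   # simple Euler integration
print(distance_to_pitch(abs(target - x)))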

Key Terms in this Chapter

Continuous Control: A continuous control system measures and adjusts the controlled quantity in continuous time.

Sonification: The use of nonspeech audio to convey information or perceptualize data.
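
A common form of sonification is parameter mapping, in which a data value is mapped onto an acoustic parameter such as pitch. The Python sketch below is one illustrative mapping; the base frequency and range are assumptions, and the exponential scale reflects the roughly logarithmic nature of pitch perception.

def sonify(value, v_min=0.0, v_max=1.0, f_low=220.0, octaves=2.0):
    """Parameter-mapping sonification: map a scalar data value onto
    pitch, spanning `octaves` octaves above a base frequency."""
    t = (min(max(value, v_min), v_max) - v_min) / (v_max - v_min)
    return f_low * 2.0 ** (octaves * t)

print(sonify(0.5))   # 440.0 Hz: one octave above the 220 Hz base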

Manual Control: A branch of control theory that is used to analyse human and system behaviour when operating in a tightly coupled loop.

Sound Localisation: The act of using aural cues to identify the location of specific sound sources.
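
Over a stereo headset, one simple localisation cue is the interaural level difference, often produced with constant-power panning. The sketch below is a standard illustration of that technique, not drawn from the chapter; the azimuth convention is an assumption.

import math

def constant_power_pan(azimuth_deg):
    """Constant-power stereo panning: convert a source azimuth
    (-90 = hard left, +90 = hard right) into left/right gains whose
    squares sum to 1, so loudness stays constant across the field."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)   # (left_gain, right_gain)

left, right = constant_power_pan(-45.0)   # a source to the front-left
print(round(left, 3), round(right, 3))    # 0.924 0.383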

Sonically Enhanced Interfaces: Interfaces in which sound represents actions or content.

Gestural Interfaces: Interfaces controlled by gestures of the human body, typically hand movements, though in some cases other movements, for example head gestures, can be used.

Prediction Horizon: How far ahead in time the model predicts the future state. When the prediction horizon is well matched to the lag between input and output, the user learns to control the system more rapidly and achieves better performance.

Nonspeech Sound: Audio feedback that does not use human speech. Using nonspeech sound in interaction has benefits such as increasing the amount of information communicated to the user, reducing the load on the visual channel, and improving performance by sharing information across different sensory modalities.

Haptic Interfaces: Interfaces that convey a sense of touch via tactile or force-feedback devices.

Quickened Displays: Displays that show the predicted future state of the system rather than the current measured or estimated state.
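
First-order quickening can be written as a short Taylor-series prediction, as in the sketch below. This is an illustration of the general idea; the prediction horizon tau and the example values are assumptions, not the chapter’s parameters.

def quickened_display(x, x_dot, x_ddot=0.0, tau=0.3):
    """Show the state predicted tau seconds ahead, via the Taylor
    expansion x(t + tau) ~= x + tau*x_dot + 0.5*tau**2*x_ddot,
    instead of the current measured state x."""
    return x + tau * x_dot + 0.5 * tau ** 2 * x_ddot

# A cursor at 0.40 moving at +0.5 units/s is displayed at its
# predicted position 0.55 rather than its measured position 0.40.
print(quickened_display(0.40, 0.5))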
