Auditory Augmentation

Till Bovermann, René Tünnermann, Thomas Hermann
DOI: 10.4018/978-1-4666-0038-6.ch008

Abstract

With auditory augmentation, the authors describe building blocks that support the design of data representation tools which unobtrusively alter the auditory characteristics of structure-borne sounds. The system enriches the structure-borne sound of objects with a sonification of (near) real-time data streams. The object’s auditory gestalt is shaped by data-driven parameters, creating a subtle display for ambient data streams. Auditory augmentation can easily be overlaid onto existing sounds and does not change prominent auditory features of the augmented objects, such as the sound’s timing or its level. In a peripheral monitoring situation, the data stay outside the users’ attention, which remains free to focus on a primary task. However, any characteristic change in the sound catches the users’ attention. This article describes the principles of auditory augmentation, gives an introduction to the Reim Software Toolbox, and presents first observations made in a preliminary long-term user study.
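The chapter introduces the Reim Software Toolbox for this purpose; as a rough illustration of the underlying idea only, the following Python sketch (not part of the toolbox; the function names, the band-pass filtering approach, and the mapping range are our own assumptions) shows how a near real-time data value could colour a structure-borne sound with a data-driven resonance while leaving the sound's timing and overall level largely untouched.

```python
import numpy as np
from scipy.signal import butter, lfilter

SAMPLE_RATE = 44100  # Hz


def map_data_to_centre(value, lo=400.0, hi=4000.0):
    """Map a normalised data value in [0, 1] to a filter centre frequency in Hz."""
    value = min(max(value, 0.0), 1.0)
    return lo * (hi / lo) ** value  # exponential mapping spreads values more evenly in pitch


def augment_block(audio_block, data_value, q=5.0, mix=0.2):
    """Colour one block of structure-borne sound with a data-driven band-pass resonance.

    audio_block: mono float array, e.g. a contact-microphone recording of the object.
    data_value:  current (near real-time) reading of the monitored data stream, in [0, 1].
    """
    centre = map_data_to_centre(data_value)
    bandwidth = centre / q
    low = (centre - bandwidth / 2) / (SAMPLE_RATE / 2)
    high = (centre + bandwidth / 2) / (SAMPLE_RATE / 2)
    b, a = butter(2, [low, high], btype="band")
    resonance = lfilter(b, a, audio_block)
    # Mix the coloured signal quietly under the original sound so that the
    # object's own timing and overall level remain the dominant features.
    return audio_block + mix * resonance
```

In a real setup the blocks would come from a live contact-microphone input and be played back with low latency; the sketch only captures the mapping step from a data value to a subtle change of the object's auditory gestalt.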

Introduction

The world around us is full of artificially gathered data. From that data we draw conclusions and make decisions that may well influence the future of our society. The difficulty here is not data acquisition – we already have plenty – but our ability to process it (Goldhaber, 1997). From this circumstance, at least two demands on data preparation can be identified: first, the representation should receive an appropriate amount of the user’s attention, depending on both the nature of the data domain and the user’s needs (Goldhaber, 2006), and second, it should use representations that truly integrate data and algorithmic functionality into the human life-world. Our awareness of being-in-the-world (Heidegger, 1927) often arises from the richness of multi-sensory stimuli. The experience of walking through a cavern, feeling a fresh breeze that contrasts with the solid rock under one’s feet and hearing echoes of footsteps and water drops, is a good example of this: all the simultaneous impressions make us aware of our body and its integration into the cavern. The lack of a single sense, or even one misleading impression, would change the holistic interpretation of the scene. In traditional computer-related work, however, many of our senses, such as hearing, taste or smell, are underused. Historically developed paradigms such as the prominent Graphical User Interface (GUI) cannot fully embed the user in the information to be mediated. Explanations for their nevertheless widespread use should be sought in their (historically developed) technical feasibility (Sutherland, 1963) rather than in usability and user-oriented simplicity.

For about the past ten years, though, there has been a shift towards multimodal and tangible representations of computer-based processes and abstract data, which attempt to close the gap between the users’ reality and the abstract environment of data and algorithms. This brings us closer to data representations that benefit from the various aspects of human being-in-the-world by incorporating modalities other than vision and general-purpose pointing devices. However, a key prerequisite for an effective and ergonomic interface to digitally stored data is that the interface designer accounts for the everyday interplay between humans and their environment and integrates the resulting interface into this complex interrelationship.

We argue that haptic feedback, feature-rich control, and the use of many modalities are essential to sufficiently mediate complex information from computers to humans. Tools to achieve this include tangible interfaces and auditory displays. While tangible user interfaces (TUIs) provide rich and at the same time direct control over digitally stored data (Brave, Ishii, & Dahley, 1998), sound, and therefore auditory displays (ADs), are widely recognised as very direct and flexible in their dynamic allocation of user attention and conveyance of information (Bovermann, Hermann, & Ritter, 2006). Tangible auditory interfaces (TAIs), a superset of both AD and TUI, have been introduced as a paradigm by the authors (Bovermann, 2010) and provide valuable guidelines for tangible auditory interface design. We believe that this combination can, following Rohrhuber (2008), help to unfold the true potential of ergonomic user interfaces (Bovermann, Groten, de Campo, & Eckel, 2007). TAIs offer an information-rich interface that allows users to select, interpret and manipulate the presented data in a way that lets them profit from their naturally excellent pattern recognition abilities.
