Auditory Augmentation

Till Bovermann, René Tünnermann, Thomas Hermann
Copyright: © 2010 | Pages: 15
DOI: 10.4018/jaci.2010040102


With auditory augmentation, the authors describe building blocks that support the design of data representation tools which unobtrusively alter the auditory characteristics of structure-borne sounds. The system enriches the structure-borne sound of objects with a sonification of (near) real-time data streams. The object's auditory gestalt is shaped by data-driven parameters, creating a subtle display for ambient data streams. Auditory augmentation can easily be overlaid onto existing sounds and does not change prominent auditory features of the augmented objects, such as the sound's timing or level. In a peripheral monitoring situation, the data stay outside the users' attention, which thereby remains free to focus on a primary task. However, any characteristic change in the sound catches the users' attention. This article describes the principles of auditory augmentation, gives an introduction to the Reim Software Toolbox, and presents the first observations made in a preliminary long-term user study.
1. Introduction

The world around us is full of artificially gathered data. Upon these data we draw conclusions and make decisions that may influence the future of our society. The difficulty here is not data acquisition – we already have plenty – but our ability to process what we have (Goldhaber, 1997). From this circumstance, at least two demands on data preparation can be identified: first, it should claim an appropriate amount of its users' attention, depending on both the nature of the data domain and the users' needs (Goldhaber, 2006); second, it should employ representations that truly integrate data and algorithmic functionality into the human life-world. Our awareness of being-in-the-world (Heidegger, 1927) often arises from the intensity of multi-sensory stimuli. The experience of walking through a cavern – feeling a fresh breeze that contrasts with the solid rock under one's feet, hearing echoes of footsteps and water drops – is a good example of this: all the simultaneous impressions make us aware of our body and its integration into the cavern. The absence of a single sense, or even one misleading impression, would change the holistic interpretation of the scene. In traditional computer-related work, however, many of our senses, such as hearing, taste, or smell, are underused. Historically developed paradigms such as the prominent Graphical User Interface (GUI) are unable to fully embed the user in the information to be mediated. The explanation for their nevertheless widespread use lies more in their historically developed technical feasibility (Sutherland, 1963) than in usability or user-oriented simplicity.

For about the past ten years, though, there has been a shift towards multimodal and tangible representations of computer-based processes and abstract data, which attempt to close the gap between the users' reality and the abstract environment of data and algorithms. This brings us closer to data representations that benefit from the various aspects of human being-in-the-world by incorporating modalities other than vision and general-purpose pointing devices. A key prerequisite for an effective and ergonomic interface to digitally stored data, however, is that the interface designer attends to the intricate interplay between humans and their environment and integrates the resulting interface into this complex interrelationship.

We argue that haptic feedback, feature-rich control, and the use of many modalities are essential to adequately mediate complex information from computers to humans. Tools to achieve this include tangible interfaces and auditory displays. While tangible user interfaces (TUI) provide rich and at the same time direct control over digitally stored data (Brave, Ishii, & Dahley, 1998), sound, and therefore Auditory Displays (AD), are widely recognised as very direct and flexible in their dynamic allocation of user attention and conveyance of information (Bovermann, Hermann, & Ritter, 2006). Tangible auditory interfaces (TAI), a superset of both AD and TUI, have been introduced as a paradigm by the authors (Bovermann, 2010) and provide valuable guidelines for tangible auditory interface design. We believe that this combination can, following Rohrhuber (2008), help to unfold the true potential of ergonomic user interfaces (Bovermann, Groten, de Campo, & Eckel, 2007). TAIs offer an information-rich interface that allows users to select, interpret, and manipulate the presented data in a way that particularly profits from humans' naturally excellent pattern recognition abilities.
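The augmentation principle outlined in the abstract – shaping an object's auditory gestalt with data-driven parameters while leaving the sound's timing and level untouched – can be illustrated with a minimal sketch. The code below is not the Reim Software Toolbox's actual API; the function names, the two-pole resonator, and the mapping from a normalised data value to a resonance frequency are illustrative assumptions. A noise burst stands in for a structure-borne sound (e.g., a key click picked up by a contact microphone), and an ambient data value selects the resonator through which it is replayed.

```python
import math
import random

def resonator_coeffs(freq_hz, q, sr=44100):
    """Coefficients for a two-pole resonator:
    y[n] = x[n] + b1*y[n-1] + b2*y[n-2]."""
    r = math.exp(-math.pi * freq_hz / (q * sr))       # pole radius from bandwidth
    b1 = 2.0 * r * math.cos(2.0 * math.pi * freq_hz / sr)
    b2 = -r * r
    return b1, b2

def augment(block, data_value, sr=44100):
    """Filter a structure-borne sound block through a resonator whose
    centre frequency is driven by a normalised data value in [0, 1].
    The mapping range (400 Hz .. 4 kHz) is an illustrative choice."""
    freq = 400.0 + data_value * 3600.0
    b1, b2 = resonator_coeffs(freq, q=20.0, sr=sr)
    y1 = y2 = 0.0
    out = []
    for x in block:
        y = x + b1 * y1 + b2 * y2
        y2, y1 = y1, y
        out.append(y)
    return out

# A short noise burst stands in for the excitation sound of the object.
random.seed(1)
burst = [random.uniform(-1.0, 1.0) for _ in range(512)]

quiet = augment(burst, 0.1)   # low data value  -> darker timbre
busy  = augment(burst, 0.9)   # high data value -> brighter timbre
```

Because only the filter changes, the onset and overall envelope of the original sound are preserved: the same physical interaction produces the same rhythmic event, but its timbre now carries the state of the monitored data stream in the user's periphery.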
