Nonspeech Audio-Based Interfaces

Shigueo Nomura, Takayuki Shiose, Hiroshi Kawakami, Osamu Katai
Copyright: © 2009 | Pages: 10
DOI: 10.4018/978-1-60566-026-4.ch454

Introduction

The combination of visual and auditory imagery offers a way of presenting and communicating complex events that emulates the richness of daily experience (Kendall, 1991). Notably, in everyday life, sound events arise from the transfer of energy to a sound object. Even in childhood, we learn to adopt the following attitudes toward sound events:

  • Recognize the occurrence of sound events and relate them to physical events.

  • Classify and identify heterogeneous sound events through a lifetime of experience.

Important distinctions in the data can be communicated by exploiting simple categorical distinctions among sound events. Taste, smell, heat, and touch are not suitable channels for data presentation because our perception of them is not quantitative. The auditory system, however, constitutes a useful channel for data presentation (Yeung, 1980).

Furthermore, according to Buxton (1990) and Kendall (1991), sounds play an important role in the study of complex phenomena through auditory data representation. Our ears and brains can extract information from nonspeech audio that cannot be, or is not, visually displayed (Buxton, 1990).
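
As a concrete illustration of this categorical mapping, here is a minimal sonification sketch, not from the chapter: it assumes Python 3 with NumPy, assigns each data category an easily discriminable pitch, and writes the resulting stream of sound events to a WAV file using only the standard library's wave module. The category names and pitch choices are illustrative assumptions.

# Minimal sonification sketch: map categorical data events to distinct tones.
# Hypothetical example, not from the chapter; assumes Python 3 with NumPy.
import wave
import numpy as np

RATE = 44100  # samples per second

def tone(freq_hz, dur_s=0.15, amp=0.4):
    """Synthesize one sine-tone 'sound event' with short fades to avoid clicks."""
    t = np.arange(int(RATE * dur_s)) / RATE
    y = amp * np.sin(2 * np.pi * freq_hz * t)
    fade = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.01)  # 10 ms fade in/out
    return y * fade

# Each category gets its own easily discriminable pitch (a categorical mapping).
PITCH = {"ok": 440.0, "warning": 660.0, "error": 880.0}

events = ["ok", "ok", "warning", "ok", "error", "ok"]  # toy data stream
signal = np.concatenate([tone(PITCH[e]) for e in events])

with wave.open("events.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)          # 16-bit PCM
    w.setframerate(RATE)
    w.writeframes((signal * 32767).astype(np.int16).tobytes())

Because each category keeps a fixed pitch, a listener can recognize and classify these artificial sound events much as we learn to classify everyday ones.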

Background

Nonspeech Audio

According to Wall and Brewster (2006), nonspeech audio can be delivered in a shorter time than synthetic speech. Listening to and comparing many values through speech alone is laborious and time consuming, so nonspeech audio can be a better means of providing an overview of the data.
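
To make the "overview" point concrete, the sketch below (an illustrative toy, not from Wall and Brewster) renders a 50-point data series as a roughly one-second pitch sweep; auditioning it takes far less time than listening to 50 synthesized values spoken aloud. The 200-2000 Hz range and 20 ms per datum are arbitrary assumptions.

# Sketch of an auditory data overview: a whole series becomes a ~1 s pitch sweep.
# Illustrative only; the mapping range (200-2000 Hz) is an arbitrary assumption.
import wave
import numpy as np

RATE = 44100
data = np.sin(np.linspace(0, 3 * np.pi, 50)) + np.random.default_rng(0).normal(0, 0.1, 50)

# Normalize values to 0..1, then map them linearly onto a 200-2000 Hz pitch range.
norm = (data - data.min()) / (data.max() - data.min())
freqs = 200.0 + norm * 1800.0

samples_per_point = int(RATE * 0.02)  # 20 ms per datum -> ~1 s overview of 50 points
phase = 0.0
out = []
for f in freqs:
    t = np.arange(samples_per_point) / RATE
    out.append(0.4 * np.sin(phase + 2 * np.pi * f * t))
    phase += 2 * np.pi * f * samples_per_point / RATE  # keep phase continuous

signal = np.concatenate(out)
with wave.open("overview.wav", "wb") as w:
    w.setnchannels(1); w.setsampwidth(2); w.setframerate(RATE)
    w.writeframes((signal * 32767).astype(np.int16).tobytes())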

Researchers such as Bronstad, Lewis, and Slatin (2003) have investigated whether nonspeech audio cues can reduce the cognitive workload of users performing very complex tasks that they would otherwise find impossible.

Nonspeech audio researchers have investigated sounds more complex than the ubiquitous interrupting beep as a way of conveying spatial structure to computer users. The vOICe Learning Edition (Jones, 2004), for example, is an interface that translates arbitrary video images from an ordinary camera into nonspeech sounds. However, the artificial sounds adopted by the vOICe have no analogs in everyday listening, so this kind of interface requires extensive training before users can employ it effectively.
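
The chapter does not spell out the vOICe's algorithm; the sketch below shows only the general column-scan mapping that such image-to-sound systems use (image columns are played left to right over time, vertical position maps to pitch, brightness maps to loudness). Treat it as an assumption-laden toy, not the vOICe itself.

# Simplified image-to-sound scan in the spirit of systems like the vOICe:
# columns are played left to right over time, row height maps to pitch,
# and pixel brightness maps to loudness. A sketch, NOT the vOICe algorithm.
import wave
import numpy as np

RATE = 44100
COL_DUR = 0.04                           # seconds of audio per image column (assumed)
FREQS = np.geomspace(200.0, 4000.0, 32)  # one oscillator per image row

def image_to_sound(img):
    """img: 2D array, shape (32 rows, N cols), brightness in 0..1; row 0 = top."""
    n = int(RATE * COL_DUR)
    t = np.arange(n) / RATE
    # Top rows should sound high, so reverse the frequency axis.
    bank = np.sin(2 * np.pi * FREQS[::-1, None] * t)        # (rows, n)
    cols = [(col[:, None] * bank).sum(axis=0) for col in img.T]
    signal = np.concatenate(cols)
    return 0.8 * signal / (np.abs(signal).max() + 1e-9)     # normalize

# Toy image: a bright diagonal stripe on a dark background.
img = np.zeros((32, 32))
np.fill_diagonal(img, 1.0)

signal = image_to_sound(img)
with wave.open("scan.wav", "wb") as w:
    w.setnchannels(1); w.setsampwidth(2); w.setframerate(RATE)
    w.writeframes((signal * 32767).astype(np.int16).tobytes())

The resulting soundscapes are exactly the kind of artificial signal with no everyday-listening analog that the paragraph above describes, which is why such mappings take practice to interpret.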

Key Terms in this Chapter

Echolocation: It is based on the principle that a listener who emits sound waves and listens to the echoes returning from a target can perceive the target by processing that echo information.

3-D Acoustic Environment: It provides learners with guided and unguided practice in which audio parameters are controlled by software. These parameters can be adjusted to suit the specific needs of a learner’s auditory experience during a computer simulation.

Everyday Listening: It is the experience of listening to events rather than sounds. Listening to airplanes, water, birds, and footsteps are examples of everyday listening, and everyday tasks such as driving and crossing the street also depend on it.

Aural Surface: It is a kind of virtual wall constituted by the nonspeech sounds generated in the 3-D acoustic environment.

Spatial Structure Perception: It refers to the perception of the size, shape, and texture of targets (objects) by processing the echoes returned through echolocation.

Conceptualization: It is the result of categorizing spatial structure after processing the visual, aural, or echo information conveyed by nonspeech audio cues.

Nonspeech Audio: It can be delivered in a shorter time than synthetic speech and is a better means of providing an overview of the data. Since nonspeech audio can be heard from all directions (360 degrees), auditory information can be picked up where visual information cannot.

Virtual Hallway: It is a kind of virtual corridor in the 3-D acoustic environment in which a listener travels, anticipating and perceiving alterations of aural surfaces during a navigation task. There are three types of virtual hallway characterized by the appearance of aural surfaces, and another three corresponding to their disappearance.
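
To make the echolocation term concrete, here is a toy simulation (a hypothetical sketch, not the chapter's 3-D acoustic environment) that emits a click, synthesizes the echo returned by a surface at a known distance, and then recovers that distance from the round-trip delay using d = c·t/2.

# Toy echolocation sketch: emit a click, simulate an echo from a surface at
# distance d, then recover d from the round-trip delay via d = c * t / 2.
# Illustrative assumptions throughout; not the chapter's 3-D environment.
import numpy as np

RATE = 44100
SPEED_OF_SOUND = 343.0  # m/s in air

def simulate_echo(distance_m, attenuation=0.3):
    """Return a mono signal containing the emitted click and its echo."""
    delay_s = 2 * distance_m / SPEED_OF_SOUND       # round trip
    n = int(RATE * (delay_s + 0.05))
    sig = np.zeros(n)
    sig[0] = 1.0                                    # the emitted click
    sig[int(RATE * delay_s)] += attenuation         # the returning echo
    return sig

def estimate_distance(sig):
    """Locate the echo as the strongest sample after the click."""
    echo_idx = 1 + np.argmax(np.abs(sig[1:]))
    delay_s = echo_idx / RATE
    return SPEED_OF_SOUND * delay_s / 2

sig = simulate_echo(4.2)
print(f"estimated distance: {estimate_distance(sig):.2f} m")  # ~4.2 m

Real echolocation in a 3-D acoustic environment would add binaural cues and reverberation; the point here is only the distance-from-delay relation that spatial structure perception builds on.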
