Toward a Novel Human Interface for Conceptualizing Spatial Information in Non-Speech Audio

Shigueo Nomura (Kyoto University, Japan), Takayuki Shiose (Kyoto University, Japan), Hiroshi Kawakami (Kyoto University, Japan), Osamu Katai (Kyoto University, Japan) and Keiji Yamanaka (Federal University of Uberlândia, Brazil)
DOI: 10.4018/978-1-59904-871-0.ch040

Abstract

We developed a concept for interfaces that use nonspeech audio in wearable devices supporting visually impaired persons. The main purpose is to enable visually impaired persons to freely conceptualize spatial information from nonspeech audio without relying on conventional means such as artificial pattern recognition and voice synthesizer systems. Subjects participated in experiments that evaluated their ability to localize pattern-associated sounds while navigating various virtual 3-D acoustic environments. The results showed that sound effects such as reverberation and reflection, as well as variable z-coordinate movement, enhance the ability to localize pattern-associated sounds. The subjects were also evaluated on their ability to conceptualize spatial information from cues in “artificial” and “natural” sounds. This evaluation revealed that “natural” sounds are essential for improving everyday listening skills and the ability to conceptualize spatial information.

Key Terms in this Chapter

Spatial Information: Information such as the size, shape, and texture of objects, inferred from the pattern of reflected and reverberated sounds.

Everyday Listening: The experience of listening to events rather than to sounds; that is, the skills relied upon in everyday tasks such as driving and crossing the street.

Sound Localization: The learning process by which visually impaired persons, considered as perceptual systems, localize pattern-associated sounds in a virtual 3-D acoustic space.

Conceptualization: The task, for visually impaired persons, of constructing categories by capturing spatial information cues in nonspeech audio.

“Artificial” Sounds: Sounds that have no analogs in everyday listening and therefore require extensive training trials before the interface can actually be used.

“Natural” Sounds: Sounds that draw on everyday listening to provide familiar information to users without demanding hard cross-modal training.

Pattern-Associated Sound: The sound event considered most appropriate to be perceived, even by visually impaired persons.

Nonspeech Audio: Audio whose benefits include increasing the information communicated to the user, reducing the information load on the visual channel, and improving performance by sharing information across different sensory modalities.
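The chapter does not give implementation details for its virtual 3-D acoustic environments, but a standard cue underlying sound localization of the kind described above is the interaural time difference (ITD): the small delay between a sound's arrival at the two ears, which listeners use to judge a source's horizontal direction. The following sketch computes the ITD with Woodworth's spherical-head approximation; the head radius, speed of sound, and function name are illustrative assumptions, not values from the chapter.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, dry air at roughly 20 °C (assumed)
HEAD_RADIUS = 0.0875     # m, a commonly used average head radius (assumed)

def interaural_time_difference(azimuth_deg: float) -> float:
    """Woodworth's spherical-head approximation of the ITD in seconds
    for a source at the given azimuth (0 deg = straight ahead,
    90 deg = directly to one side of the listener)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A virtual source straight ahead produces no delay; a source directly
# to the side produces the maximum delay (about 0.66 ms for this radius).
```

In a virtual 3-D acoustic environment, applying such per-ear delays (together with level differences, reflection, and reverberation) to a pattern-associated sound is one way to let a listener localize it by hearing alone.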
