Personalized Acoustic Interfaces for Human-Computer Interaction

Jan Rennies, Stefan Goetze, Jens-E. Appell
DOI: 10.4018/978-1-60960-177-5.ch008

Abstract

The importance of personalized and adaptable user interfaces has been extensively discussed (European Ambient Assisted Living Innovation Alliance, 2009; Alexandersson et al., 2009). However, it often remains unclear how such concepts can be concretely implemented. In the field of acoustic communication, existing models and technologies offer a wide range of possibilities. Based on these technologies, this chapter presents a concrete realization of a model-based interface in the field of acoustic human-computer interaction. The core element of the implementation is a holistic approach towards a hearing perception model which incorporates information about the acoustic environment, the context, and the user, and thereby provides relevant information for the control and adjustment of adaptable and personalized acoustic user interfaces. In principle, this way of integrating state-of-the-art technologies and models into user interfaces could be applied to other sensory modalities such as vision.

Introduction

Together with vision, hearing is one of the most important human senses. The ability to perceive sound enables us to locate and classify sound sources and forms the basis of our orientation and communication. Both in private life and at work, speech communication is of utmost importance and has been strongly shaped by advances in modern technology. A wide range of applications is available to facilitate acoustic interaction between people, ranging from mobile communication devices to video-conferencing systems. In the past years, the prevalence of computer-based applications has given rise to an increased importance of human-machine interaction. Acoustic information transfer between users and computers can be bidirectional, i.e. the user can both receive acoustic signals (e.g. spoken information) and interact with the system, e.g. by speech commands. Voice-controlled systems are particularly useful when human-machine communication is needed in hands-free applications or when the user is unable to use other means of input. Many other fields in modern societies could benefit from well-working acoustic human-computer interaction. So far, however, the particular needs of individual users have not been carefully taken into account in the design and practical application of acoustic user interfaces. In particular, but not exclusively, the significant share of people with hearing deficiencies could benefit from adaptable and personalized acoustic user interfaces.

In modern societies, hearing impairments are widespread. Recent figures estimate that about 16% of the population in industrialized countries suffer from hearing deficiencies (Shield, 2006). Due to age-related deterioration of nerve cells in the inner ear, this percentage is much higher in older subgroups of the population. Different estimates report that between 37% and 56% of the population aged 60 to 70 years suffer from hearing loss (Uimonen et al., 1999; Sohn, 2001; Davis, 2003; Johansson and Arlinger, 2003). In light of demographic change, the number of hearing-impaired people is expected to increase rapidly in the next ten years and to almost double by 2030 (Shield, 2006).

Given the growing importance of human-computer interaction, acoustic interfaces should be accessible to the whole population in as many applications as possible. Particularly for hearing-impaired people, but also for people with special communication needs, such as jet pilots, the interfaces have to be adaptable to different environments and situations and individually personalized.

This chapter proposes a perceptual approach, which aims at a concrete realization of a model-based interface in the field of acoustic human-computer interaction. The core element of the implementation is a holistic approach towards the inclusion of a hearing perception model, which incorporates information about the acoustic environment (e.g. reverberation time, damping of walls and ceilings), the current acoustic context (e.g. presence of noise sources), as well as information about the individual user (e.g. hearing loss). The model can thereby provide relevant information for the control and adjustment of the interfaces used for interaction.
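To make this control loop more tangible, a minimal Python sketch is given below. It combines the three information sources named above (environment, context, user) and derives per-band output gain recommendations. All class names, parameters, and the simple half-gain heuristic are illustrative assumptions, not the chapter's actual perception model.

from dataclasses import dataclass, field
from typing import Dict, List

# All names and thresholds below are illustrative assumptions.

@dataclass
class AcousticEnvironment:
    reverberation_time_s: float      # e.g. measured RT60 of the room
    wall_absorption: float           # average absorption coefficient (0..1)

@dataclass
class AcousticContext:
    noise_level_db: float            # current background noise level (dB SPL)
    noise_sources: List[str] = field(default_factory=list)

@dataclass
class UserProfile:
    # Hearing loss per frequency band in dB HL (a simplified audiogram)
    hearing_loss_db: Dict[int, float] = field(default_factory=dict)

def recommend_gains(env: AcousticEnvironment,
                    ctx: AcousticContext,
                    user: UserProfile,
                    speech_level_db: float = 65.0) -> Dict[int, float]:
    """Return per-band gain recommendations (dB) for the output stage.

    A deliberately simple stand-in for a hearing perception model:
    compensate roughly half the hearing loss per band (the classic
    half-gain rule from hearing-aid fitting) and add a small boost
    when the estimated signal-to-noise ratio is poor.
    """
    snr_db = speech_level_db - ctx.noise_level_db
    noise_boost = max(0.0, 6.0 - snr_db) * 0.5     # up to a few dB in noise
    # Long reverberation smears speech cues; apply a crude fixed boost.
    reverb_boost = 2.0 if env.reverberation_time_s > 0.8 else 0.0
    return {band: 0.5 * loss + noise_boost + reverb_boost
            for band, loss in user.hearing_loss_db.items()}

# Example: a user with a sloping high-frequency loss in a noisy, reverberant room
user = UserProfile(hearing_loss_db={500: 10.0, 1000: 20.0, 2000: 35.0, 4000: 50.0})
env = AcousticEnvironment(reverberation_time_s=1.2, wall_absorption=0.2)
ctx = AcousticContext(noise_level_db=62.0, noise_sources=["air conditioning"])
print(recommend_gains(env, ctx, user))

In this sketch, the half-gain rule stands in for the model's individualized fitting, and the noise and reverberation terms stand in for its context sensitivity; an actual implementation along the lines of this chapter would replace these heuristics with model-based predictions of individual speech perception.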

This chapter is organized as follows. First, the factors influencing acoustic communication in daily life are described, and the particular difficulties for hearing-impaired users of acoustic interfaces are summarized. Second, state-of-the-art technologies for increasing the accessibility of acoustic communication systems are surveyed, and their possibilities and limitations in supporting acoustic human-machine interaction are discussed. It is shown that existing models and technologies already offer a wide range of support, but that a combined approach based on the individual perception of sound is needed to realize adaptable and personalized acoustic user interfaces. Such an approach is presented subsequently, before specific applications are illustrated. Finally, the chapter is briefly summarized and concluded.
