AudioAuth: Exploring the Design and Usability of a Sound-Based Authentication System

Karim Said, Ravi Kuber, Emma Murphy
Copyright: © 2015 | Pages: 19
DOI: 10.4018/IJMHCI.2015100102

Abstract

In this paper, the authors describe a novel design alternative to the traditional PIN-based user authentication process, based on the selection of a sequence of abstract sounds. The authors conducted two studies as part of the research. The first study examines a user's ability to discriminate between sounds based on the manipulation of various sound characteristics. Results highlighted the benefits of timbre and spatial positioning as differentiators; manipulations of pitch, rhythm, and spatial position were also found to further complement a user's capacity for discerning between sounds. Using findings from the first study, the authors conducted a second study, which examined the usability of a sound-based authentication interface, AudioAuth. Finally, the authors conducted retrospective interviews with study participants to investigate the potential applicability of AudioAuth design concepts in a mobile context. The insights gained from the research, including methodological lessons, offer guidance to interface designers interested in exploiting the potential of abstract sounds to support user authentication workflows.

Introduction

The notion of an auditory interface for user authentication lies at the intersection of several dimensions of information system design and development. Such a position requires the coalescence of interaction, interface, and security design considerations in a manner that duplicates – and preferably enhances – the ubiquitous experience of logging into a system via traditional methods (i.e., keyboard input of alphanumeric passwords). Novel designs of this type face the problem of successfully altering the mechanisms of an interaction pattern with strong mental models among users. Furthermore, as certain interaction patterns become pervasive and users universally forge strong mental models of how those patterns “should work”, they slowly shed their willingness to adopt alternatives, despite the advantages those alternatives may offer. Further complicating the problem is the relative immaturity, in general, of non-visual computer interfaces, the need for high levels of accessibility among a diverse user population, and a host of issues regarding cognitive limitations and security trade-offs. For example, “strong” passwords (e.g., a lengthy string of mixed-case alphanumeric characters) are often recommended to reduce the likelihood of successful rudimentary attacks (Adams et al., 1999); however, long or complex alphanumeric strings can be difficult to memorize, encouraging users to bypass or ignore such precautionary measures.
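To make this trade-off concrete, the short Python sketch below compares the theoretical guessing spaces of a four-digit PIN, an eight-character mixed-case alphanumeric password, and a hypothetical sequence of four sounds drawn from a sixteen-sound library. The library size and sequence length are illustrative assumptions, not parameters taken from the studies reported in this paper.

# Illustrative comparison of theoretical guessing spaces; the sound-library
# size (16) and sequence length (4) are assumptions for this example only.
import math

pin_space = 10 ** 4        # four-digit PIN
password_space = 62 ** 8   # eight mixed-case alphanumeric characters (a-z, A-Z, 0-9)
sound_space = 16 ** 4      # four sounds from a 16-sound library, repeats allowed

for label, space in [("PIN (4 digits)", pin_space),
                     ("Password (8 alphanumeric)", password_space),
                     ("Sound sequence (4 of 16)", sound_space)]:
    print(f"{label}: {space:,} combinations (~{math.log2(space):.1f} bits)")

The output simply illustrates the point above: the recommended long alphanumeric password dwarfs the other spaces in raw combinatorial strength, yet it is precisely this length and complexity that makes it difficult to memorize.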

Despite these challenges, the benefits of developing effective auditory interfaces are clear. Apart from providing alternatives to users when the visual channel is blocked or restricted (e.g., individuals who are blind, or impaired by a given situation or environment), auditory interfaces offer the opportunity for richer multi-modal interaction patterns among the general user community (Blattner et al., 2012). Moreover, especially when addressing an interaction pattern as pervasive as user authentication, a successful auditory interface stands to benefit a large population by helping to universally reduce the overall barriers to technology usage.

Wearable interface designs and mobile devices with restricted visual displays create a further demand for non-visual feedback, for users with and without sensory impairments. A ubiquitous design should accommodate a user's mobile state, such as walking or driving, in which a sighted user is described as “situation blind” because they cannot access a visual display. However, current small-screen mobile devices that rely on touch-screen input and visual feedback are not always accessible to users in a state of situation blindness, or indeed to visually impaired users. Studies have indicated that the use of non-speech audio can help improve access to graphical user interfaces (Brewster, 2002), by reducing the burden on other senses, such as vision.

The present study aims to identify ways in which audio can be used to support the user authentication interaction pattern, with a view to providing an alternative to alphanumeric passwords and exploring the potential for usage in a mobile context. While past research has explored the use of auditory icons and short bursts of music for purposes of authentication, these designs may suffer from security trade-offs resulting from the recognizability of, and inherent biases implicit in, such sound libraries. In contrast, we have examined the feasibility of abstract sounds, which are more difficult to describe and therefore to disclose to others. We have aimed to identify their potential for retention over an extended period, and to better understand the usability considerations inherent to such an interface. We have also sought to explore the feasibility of an auditory authentication system for use in a mobile context.
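As a rough illustration of the general concept, and not of the authors' AudioAuth implementation, the following Python sketch treats a passcode as an ordered sequence of identifiers for abstract sounds, stores a salted hash of that sequence at enrolment, and verifies a candidate sequence with a constant-time comparison. The sound identifiers, library, and helper names are assumptions introduced for this example.

# A minimal sketch of a sound-sequence login under the assumptions stated
# above; sound identifiers and function names are hypothetical.
import hashlib
import hmac
import os

def enroll(sound_sequence):
    """Store a salted hash of the chosen sequence of sound identifiers."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", "|".join(sound_sequence).encode(), salt, 100_000)
    return salt, digest

def verify(candidate, salt, stored):
    """Re-hash the candidate sequence and compare it in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", "|".join(candidate).encode(), salt, 100_000)
    return hmac.compare_digest(digest, stored)

# Example: a passcode of four abstract sounds from a hypothetical library.
salt, stored = enroll(["timbre_03", "pitch_high_01", "rhythm_02", "spatial_left_04"])
print(verify(["timbre_03", "pitch_high_01", "rhythm_02", "spatial_left_04"], salt, stored))  # True
print(verify(["timbre_03", "rhythm_02", "pitch_high_01", "spatial_left_04"], salt, stored))  # False: order matters

In such a design, the ordering of the sounds carries information in the same way the ordering of digits does in a PIN, which is why the second verification attempt above fails.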
