Hype or Ready for Prime Time?: Speech Recognition on Mobile Handheld Devices (MASR)

Dongsong Zhang, Hsien-Ming Chou, and Lina Zhou (Department of Information Systems, University of Maryland, Baltimore County, Baltimore, MD, USA)
Copyright: © 2012 | Pages: 16
DOI: 10.4018/jhcr.2012100103


The pervasiveness of mobile handheld devices and advances in real-time continuous speech recognition technology have opened up a wide range of research opportunities in human-computer interaction for those devices. On the one hand, there has been an increasing amount of research on developing user-friendly speech recognition solutions and applications for mobile handheld devices. On the other hand, mobile speech recognition poses many distinct challenges. Aiming to provide a clear understanding of this emerging yet challenging area and a map of its research landscape, this paper presents a state-of-the-art overview of the field. We discuss three main architectures of mobile speech recognition systems, analyze their strengths and weaknesses, introduce major research issues in the field, and highlight a number of major applications of speech recognition on handheld devices. The authors also shed light on important future research issues as a road map for researchers and practitioners.
Article Preview


The proliferation of mobile handheld devices (e.g., cell phones and PDAs) and significant advances in wireless technologies and infrastructure have become a strong driving force behind many mobile applications, such as ubiquitous information access and mobile healthcare services. According to Morgan Stanley Research, Web access through mobile handheld devices has grown faster than desktop Internet access. Today, mobile handheld devices are no longer just tools for communication and personal information management, but also for social activities, entertainment, health monitoring (Martí & Delgado, 2004), location-based services, and more. They have permeated the daily routines of many people.

Despite the affordability, flexibility, accessibility, and portability of mobile handheld devices, interacting with them has not always been effective or pleasant; it often suffers from significant usability problems attributable to their inherent physical constraints, such as small screen size, restricted interaction mechanisms, and limited memory. In particular, the physical or soft keypads of those devices are small, clumsy to interact with, and error-prone. Even with the latest touch-sensitive screens of cell phones such as Apple's iPhone and HTC's Android phones, interaction through a soft keyboard remains a challenge. Users frequently have to zoom in or out to resize content and to switch from one application to another, which increases their cognitive load, especially while on the move. In addition, users' ability to interact with devices by hand can be severely hampered when one or both hands are occupied, or when users have hand or vision impairments. As a result, developing alternative user-friendly interaction mechanisms for handheld devices is essential.

One natural way to solve or alleviate the problem of interacting with a handheld device is to enable speech input. Empowered by automatic speech recognition (ASR) technology, speech input can be much faster and easier than manual interaction through buttons, styluses, and keyboards. Speech is a natural human skill. Speech input requires little attention or direct observation of the device, and it can reduce or even completely eliminate the need for hand use during interaction, making mobile devices much more accessible. In particular, speech recognition technology for mobile handheld devices (MASR) could fundamentally help users who are physically challenged or visually impaired interact with those devices.

Over the past decade, there has been an increasing amount of research toward developing MASR solutions (e.g., PocketSphinx) and applications. Some of the latest cell phones, such as the iPhone 4, the Nexus One, and the Samsung Galaxy S III, are equipped with speech recognition technology. At the same time, this research field is still in its infancy. The growth of research interest has been accompanied by increasing awareness of the actual intricacies of MASR, which often involves large vocabularies and complex processing. MASR not only inherits the challenges of recognizing natural speech faced by ASR systems on desktop computers, such as co-occurring ambient noise, continuous utterances, diversity in individual speakers' pronunciation, and out-of-vocabulary words (Zhou et al., 2006), but also faces significant resource barriers caused by the unique constraints of handheld devices and wireless networks. This raises a key question: can the ultimate goal of full-fledged automatic speech recognition on mobile handheld devices truly be achieved?
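The resource trade-offs noted above are what motivate the three main MASR architectures the paper surveys, commonly labeled in the literature as embedded (recognition entirely on the device), network (audio sent to a server-side recognizer), and distributed (acoustic front-end on the device, decoding on the server). The following is a minimal illustrative sketch of how such an architecture choice might be framed; the function name, thresholds, and units are hypothetical assumptions for illustration, not taken from the article.

```python
# Illustrative sketch: choosing among the three MASR architectures.
# All thresholds and units below are hypothetical, chosen only to make
# the trade-off concrete: on-device compute, task vocabulary size, and
# available wireless bandwidth.

def choose_masr_architecture(device_mips, vocab_size, network_kbps):
    """Return one of 'embedded', 'network', or 'distributed'."""
    if vocab_size <= 1000 and device_mips >= 200:
        # Small-vocabulary tasks (e.g., voice dialing) can fit on-device,
        # avoiding network latency and connectivity dependence.
        return "embedded"
    if network_kbps >= 64:
        # Enough bandwidth to ship (encoded) audio to a powerful
        # server-side recognizer with a large vocabulary.
        return "network"
    # Otherwise, extract compact acoustic features on the device and
    # transmit only those to the server for decoding.
    return "distributed"

print(choose_masr_architecture(device_mips=300, vocab_size=500, network_kbps=32))
print(choose_masr_architecture(device_mips=100, vocab_size=50000, network_kbps=128))
print(choose_masr_architecture(device_mips=100, vocab_size=50000, network_kbps=16))
```

The sketch captures why no single architecture dominates: embedded recognition is limited by device resources, network recognition by bandwidth and latency, and distributed recognition trades both by splitting the pipeline.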
