Improving Spoken Dialogue Understanding Using Phonetic Mixture Models

William Yang Wang, Ron Artstein, Anton Leuski, David Traum
DOI: 10.4018/978-1-61350-447-5.ch015

Abstract

Reasoning about sound similarities improves the performance of a Natural Language Understanding component that interprets speech recognizer output: the authors observed a 5% to 7% reduction in errors when they augmented the word strings with a phonetic representation, derived from the words by means of a dictionary. The best performance comes from mixture models incorporating both word and phone features. Since the phonetic representation is derived from a dictionary, the method can be applied easily without the need for integration with a specific speech recognizer. The method has similarities with autonomous (or bottom-up) psychological models of lexical access, where contextual information is not integrated at the stage of auditory perception but rather later.

Introduction

A standard architecture for spoken dialogue systems interprets the input language in two steps: first, an Automatic Speech Recognizer (ASR) transforms the user’s speech into a string of words, and then a Natural Language Understanding (NLU) component turns these words into a meaning representation. This architecture is an efficient way to tackle the problem of understanding human speech by splitting it into two manageable chunks. However, it comes at the cost of an extremely narrow bandwidth for communication between the components: often the only information that passes from the speech recognizer to the NLU is a string of words, while other information contained in the speech signal is not accessible to the interpretation component (Litman et al., 2009; Raymond and Riccardi, 2007; Walker et al., 2000). If the ASR output string is deficient, the NLU will experience difficulties that may ultimately cause it to misunderstand the input. The most straightforward way to address this issue is to improve ASR accuracy, and in the long term, perfect or near-perfect ASR may make the NLU problem for speech systems much more straightforward than it currently is. In the meantime, however, we need to find ways to help the NLU recover from speech recognition errors.
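
To make the narrowness of this interface concrete, here is a minimal Python sketch (ours, not from the chapter; the function names and the Frame type are illustrative): the only value that crosses from the ASR to the NLU is a plain word string.

    from dataclasses import dataclass

    @dataclass
    class Frame:
        """A toy meaning representation: just a dialogue-act label."""
        act: str

    def recognize_speech(audio: bytes) -> str:
        """Stand-in for an off-the-shelf ASR: audio in, word string out."""
        return "are you mary"   # e.g., a misrecognition of "are you married"

    def understand(words: str) -> Frame:
        """Stand-in NLU: it sees only the word string, nothing else from
        the speech signal (no lattices, no confidence scores)."""
        return Frame(act="unknown")

    frame = understand(recognize_speech(b""))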

This chapter addresses a particular kind of deficiency – speech recognition errors in which the ASR output has a different meaning than the actual speech input, but the two strings of words are phonetically similar. An example (taken from the experimental data described in the next section) is the question “Are you married?”, which in one instance was recognized as “Are you Mary?”. This question was presented to a conversational character who does not understand the word “Mary”, and even if he did, he would probably give an inappropriate response. The character does know that he is quite likely to be asked if he is married, but since he is not aware that “Mary” and “married” sound similar, he cannot make the connection and infer the intended question. Such confusions in ASR output are very common, with varying levels of phonetic similarity between the speech input and ASR output. Other examples from the data, where the phonetic similarity is less obvious, include “Are all soldiers deployed?” misrecognized as “Are also just avoid”, and “just tell us how you can talk” misrecognized as “does tell aside can tell”.
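
The following sketch illustrates why such a confusion is easy to detect in phone space: given a pronunciation dictionary, the two questions are only a couple of phone edits apart. The tiny dictionary below uses CMUdict-style (ARPAbet) pronunciations, and the plain Levenshtein distance is our illustration; the chapter’s actual dictionary and similarity computation may differ.

    # Toy pronunciation dictionary (CMUdict-style, stress marks omitted).
    DICT = {
        "are":     ["AA", "R"],
        "you":     ["Y", "UW"],
        "married": ["M", "AE", "R", "IY", "D"],
        "mary":    ["M", "EH", "R", "IY"],
    }

    def to_phones(words):
        """Look up each word's pronunciation and concatenate the phones."""
        return [p for w in words.split() for p in DICT[w]]

    def edit_distance(a, b):
        """Plain Levenshtein distance over phone sequences."""
        d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
             for i in range(len(a) + 1)]
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                              d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
        return d[len(a)][len(b)]

    hyp = to_phones("are you mary")      # the ASR output
    ref = to_phones("are you married")   # the intended question
    print(edit_distance(hyp, ref))       # 2: one substitution plus one deletion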

Speech recognizers typically encode information about expected outputs by means of language models, which are probability distributions over output strings. However, language models cannot fully eliminate this kind of close phonetic deviation without severely limiting the flexibility of expression available to users. A typical response to the problem is to relax the strict separation between speech recognition and language understanding, allowing more information to flow between the processes. A radical approach eschews the word-level representation altogether and interprets language directly from the phonetic representation; this has been shown to be useful in call routing applications (Alshawi, 2003; Huang and Cox, 2006). Milder approaches include building phonetic and semantic representations together (Schuler et al., 2009) or allowing the NLU to select among competing ASR outputs (Chotimongkol and Rudnicky, 2001; Gabsdil and Lemon, 2004; Skantze, 2007). Common to all of these approaches is that they work with the speech signal directly, and thus incur the costs associated with speech data: they require a substantial amount of speech data for training, and each solution is committed to the particular speech recognizer with which the rest of the system is integrated.

We present a different approach: we accept the output of an off-the-shelf speech recognizer as-is (with trained domain-specific language models), and use a dictionary to endow the NLU component with a way to compute phonetic similarity between strings of words. We do not attempt to correct the ASR output through postprocessing as in Ringger (2000), and we deliberately ignore detailed information from the speech recognizer such as the word and phone lattices which are used internally by the speech recognizer for computing the most likely output. Our approach thus avoids the costs associated with training on speech data, allows replacing one off-the-shelf speech recognizer with another, and yields performance gains even when there is little or no speech data available to train with.
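
As a rough illustration of how word and phone features can be mixed, the sketch below augments a bag of word features with dictionary-derived phone bigrams; the feature prefixes, the bigram order, and the tiny dictionary are our assumptions for the sketch, not the authors’ actual mixture model.

    # Toy pronunciation dictionary (CMUdict-style, stress marks omitted).
    DICT = {"are": ["AA", "R"], "you": ["Y", "UW"],
            "mary": ["M", "EH", "R", "IY"]}

    def to_phones(words):
        """Look up each word's pronunciation and concatenate the phones."""
        return [p for w in words.split() for p in DICT[w]]

    def phone_ngrams(phones, n=2):
        """Contiguous phone n-grams (bigrams by default)."""
        return ["_".join(phones[i:i + n]) for i in range(len(phones) - n + 1)]

    def features(utterance):
        """Bag of word features plus phone-bigram features."""
        return (["w:" + w for w in utterance.split()] +
                ["p:" + g for g in phone_ngrams(to_phones(utterance))])

    print(features("are you mary"))
    # ['w:are', 'w:you', 'w:mary', 'p:AA_R', 'p:R_Y', 'p:Y_UW',
    #  'p:UW_M', 'p:M_EH', 'p:EH_R', 'p:R_IY']

A classifier trained over such combined features can match “are you mary” against a known question like “are you married” on sound even where the word features disagree, which is the intuition behind mixing word and phone representations.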
