Enhancement of Conversational Agents By Means of Multimodal Interaction


Ramón López-Cózar, Zoraida Callejas, Gonzalo Espejo, David Griol
DOI: 10.4018/978-1-60960-617-6.ch010

Abstract

The main objective of multimodal conversational agents is to provide more engaging and participative communication by allowing users to employ more than one input modality and by offering output channels beyond voice alone. This chapter presents a detailed study of the benefits, disadvantages, and implications of incorporating multimodal interaction into conversational agents. Initially, it focuses on implementation techniques. Next, it explains the fusion and fission of multimodal information and focuses on the core module of these agents: the dialogue manager. Later on, the chapter addresses architectures, tools to develop some typical components of the agents, and evaluation methodologies. As a case study, it describes the multimodal conversational agent on which we are currently working, which provides assistance to professors and students in some of their daily activities in an academic centre, for example, a university faculty.

1. Introduction

Conversational agents can be defined as computer programs designed to interact with users in much the same way as a human being would, using more or fewer interaction modalities depending on their complexity (McTear, 2004; López-Cózar & Araki, 2005). These agents are employed in a number of applications, including tutoring (Forbes-Riley & Litman, 2011; Graesser et al., 2001; Johnson & Valente, 2008), entertainment (Ibrahim & Johansson, 2002), command and control (Stent et al., 1999), healthcare (Beveridge & Fox, 2006), call routing (Paek & Horvitz, 2004) and retrieval of information about a variety of services, for example, weather forecasts (Maragoudakis, 2007), apartment rental (Cassell et al., 1999) and travel (Huang et al., 1999).

The implementation of these agents is a complex task in which a number of technologies take part, including signal processing, phonetics, linguistics, natural language processing, affective computing, graphics and interface design, animation techniques, telecommunications, sociology and psychology. The complexity is usually addressed by dividing the implementation into simpler problems, each associated with an agent module that carries out specific functions, for example, automatic speech recognition (ASR), spoken language understanding (SLU), dialogue management (DM), natural language generation (NLG) and text-to-speech synthesis (TTS).
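To make this modular decomposition more concrete, the following sketch wires the five modules into a single interaction turn. The class and method names (ConversationalAgent, recognize, understand, next_action, generate, synthesize) are illustrative assumptions, not the interface of any particular system described in this chapter.

```python
# Illustrative pipeline for one interaction turn; the module interfaces are
# hypothetical placeholders, not a real implementation.
class ConversationalAgent:
    def __init__(self, asr, slu, dm, nlg, tts):
        self.asr = asr   # automatic speech recognition
        self.slu = slu   # spoken language understanding
        self.dm = dm     # dialogue management
        self.nlg = nlg   # natural language generation
        self.tts = tts   # text-to-speech synthesis

    def turn(self, audio_in):
        text = self.asr.recognize(audio_in)       # voice signal -> text string
        semantics = self.slu.understand(text)     # text string -> semantic representation
        action = self.dm.next_action(semantics)   # decide the next system action
        response = self.nlg.generate(action)      # action -> natural language text
        return self.tts.synthesize(response)      # text -> synthesized voice signal
```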

ASR is the process of obtaining a sentence (text string) from a voice signal (Rabiner & Juang, 1993). It is a very complex task given the diversity of factors that can affect the input, mainly concerning the speaker, the interaction context and the transmission channel. Different applications demand different levels of complexity from the speech recognizer. Cole et al. (1997) identified eight parameters that allow an optimal tailoring of the speech recognizer: speech mode, speech style, dependency, vocabulary, language model, perplexity, signal-to-noise ratio (SNR) and transduction. Nowadays, general-purpose speech recognition systems are usually based on Hidden Markov Models (HMMs).
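As a minimal illustration of the ASR step, the sketch below uses the third-party SpeechRecognition package for Python; the package choice and the audio file name are assumptions made for the example, not part of the agents discussed in this chapter.

```python
# Minimal ASR sketch with the SpeechRecognition package
# (package choice and file name are illustrative assumptions).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("user_utterance.wav") as source:  # hypothetical recording
    audio = recognizer.record(source)               # read the whole file

try:
    text = recognizer.recognize_google(audio)       # cloud-based recognizer
    print("Recognized:", text)
except sr.UnknownValueError:
    print("The utterance could not be recognized")
```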

SLU is the process of extracting the semantics from a text string (Minker, 1998). It generally involves employing morphological, lexical, syntactic, semantic, discourse and pragmatic knowledge. In a first stage, lexical and morphological knowledge allow dividing the words into their constituents, distinguishing lexemes and morphemes. Syntactic analysis yields a hierarchical structure of the sentences. Semantic analysis extracts the meaning of a complex syntactic structure from the meaning of its constituents. There are currently two major approaches to tackling the problem of spoken language understanding: rule-based (Mairesse et al., 2009) and statistical (Meza-Ruiz et al., 2008), as well as some hybrid methods (Liu et al., 2006).
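The rule-based approach can be illustrated with a toy example in which hand-written patterns fill a semantic frame of slot-value pairs from a recognized sentence. The slot names and patterns below are assumptions chosen for a flight-information scenario, not taken from the cited systems.

```python
# Toy rule-based SLU: regular-expression patterns fill a semantic frame
# (slot names and patterns are illustrative assumptions).
import re

PATTERNS = {
    "departure_city": r"from\s+([A-Z][a-z]+)",
    "arrival_city":   r"to\s+([A-Z][a-z]+)",
    "date":           r"on\s+(\w+\s+\d{1,2})",
}

def understand(sentence: str) -> dict:
    frame = {}
    for slot, pattern in PATTERNS.items():
        match = re.search(pattern, sentence)
        if match:
            frame[slot] = match.group(1)
    return frame

print(understand("I want a flight from Granada to Madrid on July 15"))
# {'departure_city': 'Granada', 'arrival_city': 'Madrid', 'date': 'July 15'}
```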

The DM is responsible for deciding the next action to be carried out by the agent. One possible action is to initiate a database query to provide information to the user, for example, available flights connecting two cities. Another possible action is requesting from the user additional data necessary to make the database query, for example, a travel date. A third typical action is confirming data obtained from the user, for example, departure and arrival cities. This last action is very important given the current limitations of state-of-the-art ASR.
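A simple frame-based dialogue manager can implement these three actions as a fixed decision policy: confirm unreliable slots, request missing slots, and otherwise query the database. The slot names and the confidence threshold in the sketch are illustrative assumptions.

```python
# Toy frame-based dialogue manager: decide the next system action from the
# current semantic frame (slot names and threshold are assumptions).

REQUIRED_SLOTS = ["departure_city", "arrival_city", "date"]
CONFIRMATION_THRESHOLD = 0.7   # below this recognition confidence, confirm the value

def next_action(frame: dict, confidences: dict) -> tuple:
    # 1) Confirm any filled slot whose recognition confidence is low.
    for slot in REQUIRED_SLOTS:
        if slot in frame and confidences.get(slot, 1.0) < CONFIRMATION_THRESHOLD:
            return ("confirm", slot, frame[slot])
    # 2) Request the first required slot that is still missing.
    for slot in REQUIRED_SLOTS:
        if slot not in frame:
            return ("request", slot)
    # 3) All data present and reliable: query the database.
    return ("query_database", frame)

print(next_action({"departure_city": "Granada"}, {"departure_city": 0.9}))
# ('request', 'arrival_city')
```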

Conversational agents can be divided into two types depending on the interaction modalities available: spoken and multimodal. The former allows just speech as the interaction modality (McTear, 2004). Typically, these agents are used to provide telephone-based information and are built from the five main technologies mentioned above, i.e., ASR, SLU, DM, NLG and TTS.
