Intelligent Agents with Personality: From Adjectives to Behavioral Schemes

François Bouchet (McGill University, Canada) and Jean-Paul Sansonnet (LIMSI-CNRS, France)
DOI: 10.4018/978-1-4666-1628-8.ch011


Conversational agents are a promising interface between humans and computers, but to be acceptable as the virtual humans they pretend to be, they need to be given one of the key elements used to define human beings: a personality. As existing personality taxonomies have been defined only for description, the authors present in this chapter a methodology dedicated to the definition of a computationally-oriented taxonomy, in order to use it to implement personality traits in conversational agents. First, a significant set of personality-trait adjectives is registered from thesaurus sources. Then, the lexical semantics related to personality traits is extracted using the WordNet database and given a formal representation in terms of so-called Behavioral Schemes. Finally, the authors propose a framework for the implementation of those schemes as influence operators controlling the decision process and the plan/action scheduling of a rational agent.
Chapter Preview


Context: Intelligent Agents for Assisting Human Users

Intelligent agents are autonomous software entities that perceive their environment and act on it to accomplish their goals. Human beings are among the elements of that environment. Some categories of agents interact more and more with human users, sometimes becoming themselves the interface between the users and the system they want to use. These are called conversational agents (Maes, 1994; Cassell, Bickmore, Billinghurst, Campbell, Chang, Vilhjálmsson, & Yan, 1999), and since many of them exist to provide some kind of assistance, we focus here on the subcategory of intelligent assistant agents (hereafter simply referred to as agents). In that kind of situation, three entities are in bilateral interaction: a human user U, an intelligent assistant agent A, and a computer system S. The user performs some activity on/with the system and can, at times, solicit the agent for general advice or for direct help with the system or the task at hand, in which case the agent may have to interact with the system directly, on behalf of the user. In such a situation (hereafter called a UAS situation), one can see that agents need both:

  • A symbolic model of the application and a rational reasoning capacity about that model. In the following, we refer to this part of an agent as the “rational agent.” Intelligent assistant agents should be able to interact with a system in the same way as autonomous agents achieve practical reasoning, in order to perform tasks in a given environment.

  • The ability to interact with the user, which requires: a) a conversational interface, which is often multimodal; b) rational processing of the user's requests as input in order to produce factual replies as output; c) expressing the answers in a way that looks and sounds like a human being assisting the user in the same task.
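The UAS situation above can be sketched in code. The following is a minimal, hypothetical illustration (none of the class or method names come from the chapter): an assistant agent sits between a user's natural-language request and a system, maps the request to a system action through a toy symbolic model of the application, and phrases a factual reply for the user.

```python
# Minimal sketch of the UAS (User-Agent-System) situation.
# All names are illustrative assumptions, not the chapter's implementation.

class System:
    """The application the user performs an activity on/with."""
    def __init__(self):
        self.state = {"files": []}

    def execute(self, command, arg):
        if command == "create_file":
            self.state["files"].append(arg)
            return f"created {arg}"
        return "unknown command"


class AssistantAgent:
    """Intelligent assistant: rational core plus conversational interface."""
    def __init__(self, system):
        self.system = system
        # Toy symbolic model of the application: verb -> system command.
        self.knowledge = {"create": "create_file"}

    def handle_request(self, utterance):
        # a) conversational interface: parse the (toy) user request
        words = utterance.lower().split()
        # b) rational processing: map the request onto a system action,
        #    acting on the system on behalf of the user
        for verb, command in self.knowledge.items():
            if verb in words:
                result = self.system.execute(command, words[-1])
                # c) express a factual reply in a human-like way
                return f"Done: I have {result} for you."
        return "Sorry, I don't know how to help with that yet."


system = System()
agent = AssistantAgent(system)
print(agent.handle_request("Please create report.txt"))
# -> Done: I have created report.txt for you.
```

The point of the sketch is the separation the bullets describe: the agent's knowledge of the system (its "rational agent" part) is distinct from the conversational layer that parses requests and phrases replies.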
