Using Misunderstanding and Discussion in Dialog as a Knowledge Acquisition or Enhancement Process

Mehdi Yousfi-Monod
DOI: 10.4018/978-1-60566-144-5.ch004

Abstract

The work described in this chapter addresses learning and communication between cognitive artificial agents and attempts to answer the following question: Is it possible to find an equivalence between a communicative process and a learning process, and thus to model and implement communication and learning as dual aspects of the same cognitive mechanism? The focus is therefore on dialog as the agents' only means of acquiring and revising knowledge, as is often the case in natural situations. This chapter concentrates on a learning situation in which two agents, in a “teacher/student” relationship, exchange information with a learning incentive (on behalf of the student) through a Socratic dialog. The teacher acts as the reliable knowledge source, while the student is an agent whose goal is to increase its knowledge base in an optimal way. The chapter first defines the nature of the agents considered, the types of relation they maintain, and the structure and contents of their knowledge base. It emphasizes the symmetry between interaction and knowledge management by highlighting knowledge “repair” procedures launched through dialogic means. These procedures deal with misunderstanding, a situation in which the student is unable to integrate new knowledge directly, and with discussion, which is related to the handling of paradoxical information. The chapter then describes learning goals and strategies, and the student's and teacher's roles in both dialog and knowledge handling, and provides solutions to problems the agents encounter. A general architecture is established and part of the theory's implementation is discussed. The conclusion reviews the achievements of this work and its potential improvements.
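To make the teacher/student setting more concrete, the following sketch shows one possible way such an exchange could be organized: the student queries the teacher (the reliable source), integrates each answer when it can, opens a clarification sub-dialog when direct integration fails (misunderstanding), and asks for a justification when the answer contradicts its current beliefs (discussion). This is only a minimal illustration under simplifying assumptions, not the model developed in the chapter; every class, method, and fact name in it is hypothetical.

```python
# A minimal, hypothetical sketch of the teacher/student exchange described
# above. All names (Teacher, Student, learn_about, ...) are illustrative and
# are not taken from the chapter itself.

class Teacher:
    """Reliable knowledge source: answers queries, defines concepts, justifies claims."""

    def __init__(self, facts, definitions):
        self.facts = dict(facts)              # proposition -> truth value
        self.definitions = dict(definitions)  # concept -> definition text

    def answer(self, proposition):
        return self.facts.get(proposition)    # None when the teacher has no answer

    def define(self, concept):
        return self.definitions.get(concept, "no definition available")

    def justify(self, proposition):
        # A real model would return structured knowledge; a string stands in here.
        return f"justification for {proposition!r}"


class Student:
    """Agent whose goal is to enlarge its knowledge base through dialog only."""

    def __init__(self, known_concepts, facts=None):
        self.known_concepts = set(known_concepts)
        self.definitions = {}
        self.kb = dict(facts or {})

    def learn_about(self, propositions, teacher):
        for prop in propositions:
            value = teacher.answer(prop)
            if value is None:
                continue                      # the teacher has nothing to say
            concept = prop.split("_")[0]      # toy notion of "concept used in prop"
            if concept not in self.known_concepts:
                # Misunderstanding: the answer cannot be integrated directly,
                # so the student opens a clarification sub-dialog.
                self.definitions[concept] = teacher.define(concept)
                self.known_concepts.add(concept)
            if prop in self.kb and self.kb[prop] != value:
                # Discussion: paradoxical information; ask for a justification
                # and (crudely) side with the reliable source.
                _ = teacher.justify(prop)
            self.kb[prop] = value             # integration or revision


if __name__ == "__main__":
    teacher = Teacher(
        facts={"penguin_flies": False, "sparrow_flies": True},
        definitions={"penguin": "a flightless sea bird"},
    )
    student = Student(known_concepts={"sparrow"}, facts={"penguin_flies": True})
    student.learn_about(["sparrow_flies", "penguin_flies"], teacher)
    print(student.kb)  # {'sparrow_flies': True, 'penguin_flies': False}
```

In this toy version the student always ends up adopting the teacher's value; the repair procedures described in the chapter are of course richer than this single revision rule.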
Chapter Preview

Introduction

Recent research in Artificial Intelligence (AI) focusing on intelligent software agents has acknowledged that communication has to be seen as an intrinsic cognitive process rather than a plain external data exchange protocol. Communication, and more specifically dialog, is an active process that modifies the agents' internal state. It can be directly or indirectly related to a change in the agents' environment, as any action is when performed. This is why Speech Act Theory (Searle, 1969), originally defended in philosophy, has migrated toward computational linguistics and cognitive science, finally providing a proper frame for a new communication language between artificial agents (Smith & Cohen, 1996), especially within agent societies (Pedersen, 2002). However, even though communication has changed status, it has not been fully exploited by those who have promoted Speech Act based enhancements to agent design. Communication has been examined as a system of constraints preceding action (Mataric, 1997), as a set of actions (mostly with performative communication, where any utterance is equivalent to an action (Cerri & Jonquet, 2003)), and as a set of heuristics for negotiation strategies (Parsons, Sierra, & Jennings, 1998), (Wooldridge & Parsons, 2000). But its feedback on the agent's knowledge base has seldom been considered a central issue. Some advances have been made in tackling it: negotiation has been recognized as tied to a process of belief revision by (Amgoud & Prade, 2003) and (Zhang, Foo, Meyer, & Kwok, 2004), thus acknowledging the role of communication as a part of knowledge processing in artificial agents, though mostly as a backup.

On the other hand, a complementary field of AI has been addressing communication issues: several lines of Human-Machine Interaction research have fostered interesting models of ‘intelligent’ communication, i.e., an information exchange in which actions related to knowledge acquisition and update are involved. (Draper & Anderson, 1991) and (Baker, 1994) model dialogs as fundamental elements in human learning and try to import them into intelligent tutoring systems (ITS). (Asoh et al., 1996), (Cook, 2000) and (Ravenscroft & Pilkington, 2000), among several others, relate dialog to cognitive actions such as mapping, problem-seeking and investigation by design. All these authors emphasize the same point: dialog supports cognition in human activity, and thus might support it if modeled in an ITS. Cognition is seen in AI as the sum of belief and knowledge acquisition or change, and reasoning. What supports it in the human learning process could also support it in machine learning: the idea that a learning process could be triggered or handled through queries, which are one element of the basic query-answer pattern in dialog, has long been defended by (Angluin, 1987). Strangely, descriptions of cognition do not directly include communication as an intrinsic cognitive process, although this has been pointed out in the more ‘human’ part of cognitive science (i.e., in cognitive psychology), and despite the fact that some twenty years ago researchers in AI did emphasize the deep relationship between knowledge and its communicative substrate in very well-known publications such as (Allen & Perrault, 1980) or (Cohen & Levesque, 1992).
