Context and Explanation in e-Collaborative Work

Patrick Brézillon
DOI: 10.4018/978-1-60960-040-2.ch016

Abstract

In face-to-face collaboration, participants rely on a large amount of contextual information to translate, interpret and understand each other's utterances, using contextual cues such as facial expressions, voice modulation, hand movements, etc. The resulting shared context constitutes the collaboration space of the virtual community. Explanation generation, on the one hand, reinforces this shared context and, on the other hand, relies on it. The situation is more critical in e-collaboration than in face-to-face collaboration because new contextual cues must be used. This chapter presents the benefits of making context and explanation generation explicit in e-collaboration, and the new types of paradigms that then become possible.

Introduction

An important challenge for virtual communities is the development of new means of interaction, especially in collaborative work. Any collaboration supposes that each participant understands how the others make a decision and the steps of the reasoning that leads to that decision. In face-to-face collaboration, participants rely on a large amount of contextual information to translate, interpret and understand each other's utterances, using contextual cues such as facial expressions, voice modulation, hand movements, etc. All these contextual elements are essential in establishing a shared context among virtual-community members, a shared context that constitutes the collaboration space of the virtual community. Explanation generation, which relies heavily on contextual cues (Karsenty and Brézillon, 1995), would therefore play an even more important role in e-collaboration than in face-to-face collaboration.

Twenty years ago, Artificial Intelligence was considered the science of explanation (Kodratoff, 1987). However, few concrete results from that period can be reused (e.g. see PRC-GDR, 1990). There are several reasons for this. The first concerns expert systems (and, later, knowledge-based systems) themselves and their past failures (Brézillon and Pomerol, 1997).

The human expert who provided the knowledge feeding the expert system was excluded. The “interface” was the knowledge engineer asking the expert “If you face this problem, which solution do you propose?” The expert generally answered something like “Well, in context A, I would consider this solution,” but the knowledge engineer retained only the pair {problem, solution} and discarded the initial triple {problem, context, solution} provided by the expert. The reason was to generalize, in order to cover a large class of similar problems, whereas the expert was giving a local solution in a specific context. Now we know that a system needs to acquire knowledge together with its context of use.
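A minimal sketch of this acquisition pitfall is given below (the names and example values are hypothetical, not taken from the chapter): keeping only {problem, solution} pairs loses the contextual condition under which the expert's solution is valid, whereas the full triple preserves it.

```python
# Hypothetical sketch: {problem, solution} pairs vs. {problem, context, solution} triples.
from dataclasses import dataclass

@dataclass
class PairRule:
    problem: str
    solution: str            # the context has been discarded by the knowledge engineer

@dataclass
class TripleRule:
    problem: str
    context: str             # "in context A, I would consider this solution"
    solution: str

# The expert's answer, kept in full (illustrative values):
rule = TripleRule(
    problem="pump overheats",
    context="ambient temperature above 40 °C",
    solution="reduce the duty cycle before inspecting the coolant loop",
)

# Generalizing to a pair loses the condition that made the solution valid:
generalized = PairRule(rule.problem, rule.solution)
print(generalized)           # no trace of the context of use remains
```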

On the other side, the user was excluded from the noble part of problem solving because all the expert knowledge was supposed to be in the machine: the machine was considered the oracle and the user a novice (Karsenty and Brézillon, 1995). Thus, explanations aimed to convince the user of the machine's rationale, without regard to what the user knew or wanted to know. Now we know that a user-centered approach is needed (Brézillon, 2003).

By capturing knowledge from the expert, it was assumed that all the needed knowledge could be put in the machine prior to the use of the system. However, in expert diagnosis the exception is rather the norm. Thus, the system was able to solve the 80% of most common problems, for which users did not need explanations, and could say nothing about the 20% that users did not understand. Now we know that systems must be able to acquire knowledge incrementally, together with its context of use, in order to address users' more specific problems.

Systems were unable to generate relevant explanations because they did not consider what the user's question really was, nor in which context it was asked. The request for an explanation was analyzed only on the basis of the information available to the system. Now we know that the system must first understand the user's question and then build the answer jointly with the user.

Thus, the three key lessons learned are: (1) KM (normally “knowledge management”) should stand for management of knowledge in its context; (2) any collaboration needs a user-centered approach; and (3) an intelligent system must incrementally acquire new knowledge and learn the corresponding new practices. In (Brézillon, 2007) and (Brézillon and Brézillon, 2007) we present a context-based formalism for explaining concretely the differences, often cited but never clearly identified, between prescribed and effective tasks (Leplat and Hoc, 1983), procedures and practices (Brézillon, 2005), and logic of functioning and logic of use (Richard, 1983).

Focusing on explanation generation, it appears that a context-based formalism for representing knowledge and reasoning allows the end-user to be brought into the loop of system development and makes it possible to generate new types of explanations. Moreover, such a formalism allows a uniform representation of elements of knowledge, of reasoning and of context.
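To make the idea of a uniform representation concrete, here is a hypothetical sketch (illustrative names only, not the formalism of Brézillon, 2007): reasoning steps and contextual elements are represented as nodes of the same structure, and walking the structure with the current context yields both the practice followed and a context-grounded explanation of it.

```python
# Hypothetical sketch of a uniform representation mixing reasoning steps (actions)
# and contextual elements (branch points); names are illustrative, not the chapter's.
from dataclasses import dataclass, field
from typing import Dict, List, Union

@dataclass
class Action:
    name: str                                    # an elementary reasoning step

@dataclass
class ContextualElement:
    question: str                                # e.g. "Is the user a novice?"
    branches: Dict[str, List["Node"]] = field(default_factory=dict)

Node = Union[Action, ContextualElement]

def explain(path: List[Node], answers: Dict[str, str]) -> List[str]:
    """Walk the structure under the current context and collect the steps,
    which doubles as a context-grounded explanation of the practice."""
    steps: List[str] = []
    for node in path:
        if isinstance(node, Action):
            steps.append(f"do: {node.name}")
        else:
            value = answers[node.question]
            steps.append(f"because {node.question} -> {value}")
            steps.extend(explain(node.branches[value], answers))
    return steps

procedure: List[Node] = [
    Action("restate the user's question"),
    ContextualElement(
        "Is the user a novice?",
        {"yes": [Action("explain the rationale step by step")],
         "no":  [Action("give only the differing contextual elements")]},
    ),
]

print(explain(procedure, {"Is the user a novice?": "yes"}))
```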
