Learning Exceptions to Refine a Domain Expertise


Rallou Thomopoulos (INRA/LIRMM, France)
Copyright: © 2009 | Pages: 8
DOI: 10.4018/978-1-60566-010-3.ch175

Chapter Preview

Introduction

This chapter deals with the problem of the cooperation of heterogeneous knowledge for the construction of a domain expertise, and more specifically for the discovery of new unexpected knowledge. Two kinds of knowledge are taken into account:

  • Expert statements. They constitute generic knowledge that arises from the experience of domain experts and describes commonly admitted mechanisms governing the domain. This knowledge is represented as conceptual graph rules, a formalism that has the advantage of combining a logic-based semantics with an equivalent graphical representation, essential for non-specialist users (Bos, 1997).

  • Experimental data, drawn from the international literature of the domain. They are represented in the relational model. These numerous data quantitatively describe, in detail, the experiments that were carried out to deepen the knowledge of the domain, together with the results obtained. These results may confirm the knowledge provided by the expert statements – or not.

The cooperation of both kinds of knowledge aims, first, at testing the validity of the expert statements against the experimental data and, second, at discovering refinements of the expert statements that consolidate the domain expertise.

The two formalisms differ in two major ways. First, conceptual graphs represent knowledge at a more generic level than the relational data. Second, the conceptual graph model includes an ontological part (a hierarchized vocabulary that constitutes the support of the model), which the relational model lacks.
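To make the notion of "support" concrete, a hierarchized vocabulary can be sketched as a map from each concept type to its parent type. The food-quality type names below are hypothetical illustrations, not the chapter's actual ontology:

```python
# Hypothetical concept-type hierarchy (the "support"): child type -> parent type.
# "Top" plays the role of the universal type.
PARENT = {
    "Process": "Top",
    "Cooking": "Process",
    "Extrusion": "Process",
    "Component": "Top",
    "Vitamin": "Component",
    "VitaminB1": "Vitamin",
    "VitaminC": "Vitamin",
}

def is_a(t, ancestor):
    """True if type t is a specialization of ancestor (or equal to it)."""
    while t is not None:
        if t == ancestor:
            return True
        t = PARENT.get(t)  # climb one level; None once past "Top"
    return False
```

This partial order between types is what lets a generic expert statement (about any `Process`, say) be compared with relational rows that mention only specific processes.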

We introduce a process that tests the validity of expert statements against the experimental data, that is, that queries a relational database with a system expressed in the conceptual graph formalism. This process is based on annotated conceptual graph patterns. When an expert statement turns out not to be valid, a second-step objective is to refine it. This refinement consists of automatic exception rule learning, which provides knowledge that is unexpected with regard to previously established knowledge.
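The validity test can be sketched as follows: find the rows that satisfy a rule's hypothesis and count how many violate its conclusion. The rule, column names, and rows below are hypothetical stand-ins for the chapter's conceptual graph rules and relational database:

```python
# Stand-in for the relational experimental database (hypothetical rows).
rows = [
    {"process": "Cooking",   "temperature": 120, "vitamin_loss": "high"},
    {"process": "Extrusion", "temperature": 140, "vitamin_loss": "high"},
    {"process": "Cooking",   "temperature": 110, "vitamin_loss": "low"},  # counterexample
    {"process": "Cooking",   "temperature": 60,  "vitamin_loss": "low"},
]

# Hypothetical expert rule: "if treatment temperature >= 100,
# then vitamin loss is high".
def hypothesis(row):
    return row["temperature"] >= 100

def conclusion(row):
    return row["vitamin_loss"] == "high"

def check_rule(rows, hypothesis, conclusion):
    """Return (number of rows matching the hypothesis,
    number of those that violate the conclusion)."""
    matching = [r for r in rows if hypothesis(r)]
    counter = [r for r in matching if not conclusion(r)]
    return len(matching), len(counter)

n, k = check_rule(rows, hypothesis, conclusion)
print(n, k)  # 3 matching rows, 1 counterexample: the rule is not valid
```

A non-zero counterexample count is what triggers the second step, the search for exception rules that refine the invalid statement.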

The examples given in this chapter have been designed using the CoGui tool (http://www.lirmm.fr/cogui/) and concern a concrete application in the domain of food quality.


Background

Handling exceptions is a long-standing concern of artificial intelligence (Goodenough, 1975) that has been approached from various directions. In this project, we are concerned with the more specific theme of exception rules. Hussain (2000) clearly explains the interest of exceptions as contradictions of common belief. Approaches for finding “interesting” rules are usually classified into two categories (Silberschatz, 1996): objective finding (as in Hussain, 2000), which relies on frequency-based criteria and consists of identifying deviations among rules learnt from data, and subjective finding, which relies on belief-based criteria and consists of identifying deviations from rules given by the user. Finding “unexpected” rules belongs to the second category and can itself be subdivided into syntax-based (Liu, 1997; see Li, 2007 for a recent work on sequence mining) and logic-based (Padmanabhan, 1998; Padmanabhan, 2006) approaches.

Our approach is related to the latter, and more specifically to first-order rule learning techniques (Mitchell, 1997). However, in the above approaches, rule learning is purely data driven and user knowledge is used as a filter, either in post-analysis (Liu, 1997; Sahar, 1999) or in earlier stages (Padmanabhan, 1998; Wang, 2003). We instead propose to find exception rules by trying variations – refinements – of the forms of the rules given by the experts, using an ontology conceived for this specific purpose; the data are only used for rule verification. This reversed approach is relevant in domains characterized by relatively high confidence in human expertise, and it guarantees that the learnt exceptions are understandable and usable. This advantage is reinforced by the graphical representation of the rules, expressed in the conceptual graph model.
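The reversed approach can be sketched as a loop that generates candidate exceptions by specializing an expert rule along the ontology, then uses the data only to verify each candidate. The type hierarchy, rule, and rows below are hypothetical illustrations:

```python
# Hypothetical type hierarchy: child type -> parent type.
PARENT = {"Cooking": "Process", "Extrusion": "Process", "Process": None}

# Hypothetical experimental rows.
rows = [
    {"process": "Cooking",   "vitamin_loss": "high"},
    {"process": "Cooking",   "vitamin_loss": "high"},
    {"process": "Extrusion", "vitamin_loss": "low"},
    {"process": "Extrusion", "vitamin_loss": "low"},
]

def specializations(t):
    """Direct subtypes of t in the ontology."""
    return [s for s, p in PARENT.items() if p == t]

def learn_exceptions(general_type, rows):
    """Hypothetical expert rule: 'any Process causes high vitamin loss'.
    Generate candidates by specializing the rule's premise down the
    ontology, and keep the subtypes that the data contradict on every
    matching row: candidate exceptions 'if <subtype> then NOT high loss'."""
    exceptions = []
    for sub in specializations(general_type):
        matching = [r for r in rows if r["process"] == sub]
        if matching and all(r["vitamin_loss"] != "high" for r in matching):
            exceptions.append(sub)
    return exceptions

print(learn_exceptions("Process", rows))  # ['Extrusion']
```

Because every candidate is a specialization of a rule the experts wrote, each learnt exception reads as a refinement of an existing statement rather than an opaque data-mined pattern, which is what makes it understandable and usable.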
