Evaluation of Decision Rules by Qualities for Decision-Making Systems

Ivan Bruha (McMaster University, Canada)
Copyright: © 2009 |Pages: 7
DOI: 10.4018/978-1-60566-010-3.ch123

Abstract

A ‘traditional’ learning algorithm that induces a set of decision rules usually represents a robust and comprehensive system that discovers knowledge from (usually large) datasets. We call this discipline Data Mining (DM). Any classifier, expert system, or, generally, any decision-supporting system can then utilize this decision set to derive a decision (prediction) about given problems, observations, or diagnostics. DM can be defined as a nontrivial process of identifying valid, novel, and ultimately understandable knowledge in data. DM, as a multidisciplinary activity, refers to the overall process of deriving useful knowledge from databases, i.e., extracting high-level knowledge from low-level data in the context of large databases. A rule-inducing learning algorithm may yield either an ordered or an unordered set of decision rules. The latter seems more understandable to humans and directly applicable in most expert or decision-supporting systems. However, classification using unordered decision rules may be accompanied by conflict situations, particularly when several rules belonging to different classes match (‘fire’ for) an input to-be-classified (unseen) object. One possible solution to this conflict is to associate each decision rule induced by a learning algorithm with a numerical factor, commonly called the rule quality. The chapter first surveys empirical and statistical formulas for rule quality and compares their characteristics. Statistical tools such as contingency tables, rule consistency, completeness, quality, measures of association, and measures of agreement are introduced as suitable vehicles for describing the behaviour of a decision rule. After that, a brief theoretical methodology for defining rule qualities is introduced. The chapter concludes with an analysis of the formulas for rule qualities and a list of future trends in this discipline.
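The contingency-table statistics mentioned above can be illustrated with a minimal sketch. This is not code from the chapter; it only shows the standard definitions of rule consistency and completeness for a single rule R predicting class C, with illustrative variable names.

```python
# Contingency-table counts for one decision rule R and class C (names are ours):
#   n_rc  - examples covered by R that belong to C (correct coverage)
#   n_rnc - examples covered by R that do not belong to C
#   n_nrc - examples of C that R fails to cover
def rule_statistics(n_rc, n_rnc, n_nrc):
    consistency = n_rc / (n_rc + n_rnc)   # fraction of covered examples that are correct
    completeness = n_rc / (n_rc + n_nrc)  # fraction of class C that the rule covers
    return consistency, completeness

# A rule covering 50 examples, 40 of them correctly, for a class of 60 examples:
cons, comp = rule_statistics(40, 10, 20)
print(cons, comp)  # 0.8 and 0.666...
```

A rule quality formula then typically trades these two quantities off against each other, since a highly consistent rule may cover very little of its class and vice versa.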

Background

Machine Learning (ML) and Data Mining (DM) utilize several paradigms for extracting knowledge that can then be exploited as a decision scenario (architecture) within an expert system, a classification (prediction) system, or any decision-supporting system. One commonly used paradigm in Machine Learning is divide-and-conquer, which induces decision trees (Quinlan, 1994). Another widely used paradigm, covering, generates sets of decision rules, e.g., the CNx family (Clark & Boswell, 1991; Bruha, 1997), C4.5Rules, and Ripper. However, rule-based classification systems face an important deficiency that must be solved in order to improve their predictive power; this issue is discussed in the next section.

Also, it should be mentioned that there are two types of agents in a multistrategy decision-supporting architecture. The simpler one yields a single decision; the more sophisticated one induces a list of several decisions. In both types, each decision should be accompanied by the agent’s confidence (belief) in it. These functional measurements are mostly supported by statistical analysis based both on the certainty (accuracy, predictability) of the agent itself and on the consistency of its decisions. There have been quite a few research enquiries aiming to define such statistics formally; some, however, have yielded quite complex and hardly computable formulas, so that they have never been used.

One possible way to solve the above problems is to associate each decision rule induced by a learning algorithm with a numerical factor: a rule quality. The issue of rule quality has been discussed in many papers; here we introduce just the most essential ones. (Bergadano et al., 1988) and (Mingers, 1989) were evidently among the first papers to introduce this problem. (Kononenko, 1992) and (Bruha, 1997) followed; the latter paper in particular presented a methodological insight into this discipline. (An & Cercone, 2001) extended some of the techniques introduced by (Bruha, 1997). (Tkadlec & Bruha, 2003) presents a theoretical methodology and general definitions of the notions of a Designer, a Learner, and a Classifier in a formal manner, including parameters usually attached to these concepts, such as rule consistency, completeness, quality, and matching rate. That paper also provides minimum-requirement definitions as necessary conditions for the above concepts. Any designer (decision-system builder) of a new multiple-rule system may start with these minimum requirements.
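How rule qualities resolve the conflict among unordered rules can be sketched as follows. This is a hypothetical illustration, not the chapter’s algorithm: each rule carries a quality score, and when rules of different classes fire for the same example, the class whose firing rules have the highest combined quality wins. The rule predicates and quality values below are invented for the example.

```python
# Each rule is (predicate over an example, predicted class, quality score).
def classify(example, rules):
    scores = {}
    for matches, cls, quality in rules:
        if matches(example):  # the rule 'fires' for this example
            scores[cls] = scores.get(cls, 0.0) + quality
    if not scores:
        return None  # no rule fired; a real system might fall back to a default class
    return max(scores, key=scores.get)  # class with highest total quality wins

# Illustrative rules with made-up qualities:
rules = [
    (lambda x: x["temp"] > 37.5, "flu", 0.9),
    (lambda x: x["cough"], "flu", 0.6),
    (lambda x: x["cough"], "cold", 0.7),
]
print(classify({"temp": 38.0, "cough": True}, rules))  # flu (0.9 + 0.6 > 0.7)
```

Summing qualities is only one possible combination scheme; other schemes (e.g., taking the maximum quality per class, or probabilistic combination) lead to different conflict-resolution behaviour, which is part of what the surveyed formulas aim to characterize.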
