Selection of High Quality Rules in Associative Classification

Silvia Chiusano (Politecnico di Torino, Italy) and Paolo Garza (Politecnico di Torino, Italy)
DOI: 10.4018/978-1-60566-404-0.ch010

Abstract

In this chapter the authors present a comparative study of five well-known classification rule pruning methods, with the aim of understanding their theoretical foundations and their impact on classification accuracy, the percentage of pruned rules, model size, and the number of misclassified and unclassified data objects. Moreover, they analyze the characteristics of both the selected and the pruned rule sets in terms of information content. A large set of experiments has been run to empirically evaluate the effect of the pruning methods when applied individually as well as in combination.

Introduction

Given a set of labeled data objects, denoted as the training data set, classification rule mining aims at discovering a small set of rules that form a model, the classifier, which is then used to classify new data whose class label is unknown (Quinlan, 1993). Classification accuracy is evaluated on a different data set, the test data set.
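The workflow just described, learning rules from a training set and then measuring accuracy on a separate test set, can be sketched as follows. The rules, records, and default class below are invented for illustration and are not taken from the chapter:

```python
# Each rule is (body, class): the body is a set of attribute=value items.
# Toy rules and records, invented for illustration only.
rules = [({"windy=no"}, "play"), ({"windy=yes"}, "stop")]

def classify(record, rules, default="play"):
    """Return the class of the first rule whose body matches the record."""
    for body, label in rules:
        if body <= record:  # rule body is a subset of the record's items
            return label
    return default  # fall back to a default class if no rule matches

# Accuracy is measured on a separate, labeled test set.
test_set = [({"outlook=rain", "windy=yes"}, "stop"),
            ({"outlook=sunny", "windy=no"}, "play")]
accuracy = sum(classify(rec, rules) == lbl for rec, lbl in test_set) / len(test_set)
```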

Recently, associative classification methods have been proposed, which exploit association rule mining techniques to generate classification models for structured data (Arunasalam & Chawla, 2006; Baralis & Garza, 2002; Baralis, Chiusano, & Garza, 2008; Dong, Zhang, Wong, & Li, 1999; Li, Han, & Pei, 2001; Liu, Hsu, & Ma, 1998; Liu, Ma, & Wong, 2000; Thabtah, Cowling, & Peng, 2004; Wang & Karypis, 2005; Wang, Zhou, & He, 2000; Yin & Han, 2003). Associative classification generates highly accurate classifiers since, differently from decision trees (Quinlan, 1993), it considers the simultaneous correspondence of values of different attributes.

The generation of an associative classifier usually consists of two steps. First, a large set of classification rules is extracted from the training data to allow a wide selection of rules and the generation of accurate classifiers. Then, pruning techniques are applied to select a small subset of high quality rules and build an accurate model of the training data.
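As an illustration of the first step, the following sketch enumerates class association rules whose support and confidence exceed minimum thresholds. The toy weather-style dataset, the thresholds, and the rule-length limit are assumptions made for the example, not values from the chapter:

```python
from itertools import combinations

# Toy training set: each record is a set of attribute=value items plus a class
# label. Dataset and thresholds are illustrative only.
training = [
    ({"outlook=sunny", "windy=no"}, "play"),
    ({"outlook=sunny", "windy=yes"}, "stop"),
    ({"outlook=rain", "windy=no"}, "play"),
    ({"outlook=rain", "windy=yes"}, "stop"),
    ({"outlook=sunny", "windy=no"}, "play"),
]

def mine_rules(data, min_support=0.3, min_confidence=0.7, max_len=2):
    """Step 1: enumerate body -> class rules above support/confidence thresholds."""
    n = len(data)
    items = sorted({i for record, _ in data for i in record})
    rules = []
    for k in range(1, max_len + 1):
        for body in combinations(items, k):
            body = set(body)
            matching = [label for record, label in data if body <= record]
            if not matching:
                continue
            for label in set(matching):
                support = matching.count(label) / n          # fraction of all records
                confidence = matching.count(label) / len(matching)  # precision of the rule
                if support >= min_support and confidence >= min_confidence:
                    rules.append((body, label, support, confidence))
    return rules

rules = mine_rules(training)
```

Even on this tiny dataset the enumeration yields redundant specializations (e.g., a two-item rule subsumed by a one-item rule with equal confidence), which motivates the pruning step.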

Similarly to decision tree based approaches, associative classification exploits pruning techniques (a) to deal with the combinatorial explosion of the solution set and (b) to improve classification accuracy. While in case (a) pruning aims at reducing the size of the rule set without affecting its accuracy, in case (b) it aims at removing the rules that can decrease classification accuracy because they introduce noise and cause overfitting.
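Case (a) can be illustrated with database coverage pruning, used for instance in CBA (Liu, Hsu, & Ma, 1998): rules are sorted by quality, and a rule is kept only if it correctly classifies at least one training record not yet covered by a better rule. This is a minimal sketch; the rules and records are invented toy values:

```python
# Toy rules (body, class, support, confidence) and training records;
# values are illustrative, not taken from the chapter.
rules = [
    ({"windy=no"}, "play", 0.6, 1.0),
    ({"windy=yes"}, "stop", 0.4, 1.0),
    ({"outlook=sunny", "windy=no"}, "play", 0.4, 1.0),  # redundant specialization
]
training = [
    ({"outlook=sunny", "windy=no"}, "play"),
    ({"outlook=sunny", "windy=yes"}, "stop"),
    ({"outlook=rain", "windy=no"}, "play"),
    ({"outlook=rain", "windy=yes"}, "stop"),
]

def coverage_prune(rules, data):
    """Database-coverage pruning: sort by confidence, support, then generality;
    keep a rule only if it correctly classifies a still-uncovered record."""
    ordered = sorted(rules, key=lambda r: (-r[3], -r[2], len(r[0])))
    uncovered = list(data)
    kept = []
    for body, label, support, confidence in ordered:
        correct = [rec for rec in uncovered if body <= rec[0] and rec[1] == label]
        if correct:
            kept.append((body, label, support, confidence))
            # Remove every record the rule matches, right or wrong.
            uncovered = [rec for rec in uncovered if not body <= rec[0]]
        if not uncovered:
            break
    return kept

kept = coverage_prune(rules, training)
```

Here the two general one-item rules already cover every training record, so the redundant two-item rule is pruned: the rule set shrinks without changing the classifier's behavior on the training data.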

So far, associative classification has mainly exploited ad hoc versions of pruning methods previously proposed for decision tree based classification (Esposito, Malerba, & Semeraro, 1997) or quality measures proposed for association rule selection (Tan, Kumar, & Srivastava, 2002). Significant improvements in classification accuracy have been obtained by slight modifications of the same pruning methods, and by different combinations of them (Arunasalam & Chawla, 2006; Baralis & Garza, 2002; Baralis, Chiusano, & Garza, 2008; Dong, Zhang, Wong, & Li, 1999; Li, Han, & Pei, 2001; Liu, Hsu, & Ma, 1998; Liu, Ma, & Wong, 2000; Thabtah, Cowling, & Peng, 2004; Wang & Karypis, 2005; Wang, Zhou, & He, 2000; Yin & Han, 2003). However, to the best of our knowledge, the specific contribution of the different techniques has never been investigated. Thabtah (2007) surveys the existing associative classification methods and the pruning techniques they use. Differently from Thabtah (2007), this chapter focuses on the pruning methods used in associative classification, deeply investigates their theoretical properties, and experimentally evaluates their effects. Because of the increasing relevance of associative classification, we believe that a systematic and experimental analysis of the effects of rule pruning in this context can be beneficial. A similar analysis of the effects of pruning in decision tree based classification was presented by Esposito, Malerba, and Semeraro (1997).
