Previous Work
Boosting is one of the most commonly used classifier learning approaches (Treptow & Zell, 2004; Freund & Schapire, 1995; Freund & Schapire, 1996; Schapire & Singer, 1998). According to Schapire and Singer (1998), boosting is a method of finding a highly accurate hypothesis by combining many “weak” hypotheses, each of which is only moderately accurate. It manipulates the training examples to generate multiple hypotheses. In each iteration, the learning algorithm applies different weights to the training examples and returns a hypothesis ht. The weighted error of ht is computed and used to update the weights on the training examples: more weight is placed on the examples that ht misclassified, and less weight on the examples it classified correctly. The final classifier is constructed by a weighted vote of the individual classifiers. In the proposed method, many weak GA-based classifiers are built iteratively. When combined, these weak classifiers form an accurate strong model, since ensemble classifiers outperform single classifiers (Chandra & Yao, 2006).
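The boosting loop described above can be sketched as follows. This is a minimal AdaBoost-style illustration, not the cited authors' implementation; the `weak_learner(X, y, w)` interface (fit under sample weights `w`, return a predict function) is an assumption made for the sketch, and labels are taken to be +1/-1.

```python
import numpy as np

def adaboost(X, y, weak_learner, n_rounds=10):
    # Sketch of the boosting loop: reweight examples each round,
    # then combine the weak hypotheses by a weighted vote.
    n = len(y)
    w = np.full(n, 1.0 / n)            # start with uniform weights
    hypotheses, alphas = [], []
    for _ in range(n_rounds):
        h = weak_learner(X, y, w)      # assumed interface: returns a predictor
        pred = h(X)
        err = np.sum(w[pred != y])     # weighted error of h_t
        err = min(max(err, 1e-10), 1 - 1e-10)  # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        # up-weight misclassified examples, down-weight correct ones
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        hypotheses.append(h)
        alphas.append(alpha)

    def final_classifier(X):
        # weighted vote of the individual weak classifiers
        return np.sign(sum(a * h(X) for a, h in zip(alphas, hypotheses)))
    return final_classifier
```

A weak learner here could be anything only moderately accurate, such as a decision stump; in the proposed method of the article it is a GA-based rule classifier.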
In the GA literature, complexity arising from scalability is mainly addressed by parallel processing (Rivera, 2004; Lopes & Freitas, 1999). For example, the parallel genetic algorithm proposed by Lopes and Freitas (1999) addresses the scalability issue for GAs by involving multiple processors: the data set is divided into multiple parts, the parts are distributed to the processors, and each processor generates rules for its own partition. The rules generated by each processor are then shared among all processors for fitness calculation.
Incremental learning is a popular way of making classification methodologies scalable (Domingos & Hulten, 2000; Spencer & Domingos, 2001; Polikar, Upda, Upda, & Honavar, 2001; Gao, Ding, Fan, Han, & Yu, 2008); such methods try to build the model by scanning the data set only once. Only very few methods in the data mining literature employ a GA as the base algorithm for incremental learning. An incremental GA was proposed by Guan and Zhu (2005) for a dynamic environment; it updates the rules based on newly arriving data. With the arrival of new data, new attributes, or new classes, the classification model may change, and the authors propose an incremental GA to deal with this.
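The incremental idea, updating existing rules rather than retraining from scratch when new data arrives, can be sketched as below. This is a generic illustration, not the cited authors' method; the `fitness` and `mutate` interfaces and the survivor fraction are assumptions made for the sketch.

```python
def incremental_update(population, new_batch, fitness, mutate, keep=0.5):
    # Re-score the existing rule population on the newly arrived batch,
    # retain the rules that still perform well, and refill the population
    # by mutating the survivors (a stand-in for further GA evolution).
    scored = sorted(population, key=lambda r: fitness(r, new_batch),
                    reverse=True)
    survivors = scored[:max(1, int(len(scored) * keep))]
    children = [mutate(r) for r in survivors]
    return (survivors + children)[:len(population)]
```

Because only the incoming batch is scanned, each update costs time proportional to the batch rather than to the accumulated data set, which is what makes the approach attractive for dynamic environments.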