1. Introduction
Instance reduction is a crucial task for instance-based machine learning algorithms. Instance-based learning algorithms use all prototypes in the training set to construct their inference structures. The k-Nearest Neighbor rule (KNN) (Cover & Hart, 1967) is a well-known example of an instance-based learning algorithm. It requires a large memory space, since the whole training set has to be stored, which may be an excessive amount of storage for large datasets and leads to a large computation time in the classification stage.
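The storage and computation burden described above can be illustrated with a minimal KNN classifier. This is an illustrative sketch, not code from the cited work; it assumes Euclidean distance, and the function name is hypothetical:

```python
import numpy as np

def knn_classify(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training instances.

    The entire training set (X_train, y_train) must be kept in memory and
    scanned for every query, which is the storage/computation drawback
    that instance reduction methods aim to mitigate.
    """
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every stored instance
    nearest = np.argsort(dists)[:k]               # indices of the k closest instances
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority label

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_classify(X, y, np.array([0.95, 1.0]), k=3))
```

Note that both the memory cost and the per-query distance computation grow linearly with the training set size, which motivates reducing the stored set.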
To overcome these shortcomings, Hart proposed the Condensed Nearest Neighbor (CNN) (Hart, 1968) algorithm, which reduces the size of the training set using the nearest neighbor decision rule. It removes every instance that is correctly classified by the nearest neighbor rule, obtaining a consistent subset that does not lower the classification accuracy achieved with the whole training set. The basic idea is that patterns located near the decision boundary are crucial to classification, while those far from the boundary have little effect. The Edited Nearest Neighbor (ENN) rule (Wilson, 1972) is another instance reduction approach; it aims to remove noisy patterns, i.e., those that are not correctly classified by their nearest neighbors. Moreover, hybrid methods, which compute a subset of the training set by combining the characteristics of editing and condensing, are widely used for instance reduction problems. Typically, an editing method is applied ahead of a condensing method as a pre-processing filter that eliminates noisy patterns.
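Hart's condensing step can be sketched as follows. This is a minimal illustration of the iterative scheme (seed the subset, add every instance the current subset misclassifies, repeat until stable), assuming Euclidean distance and 1-NN; the function name is hypothetical:

```python
import numpy as np

def cnn_condense(X, y):
    """Minimal sketch of Hart's Condensed Nearest Neighbor rule.

    Start the subset S with the first training instance; repeatedly scan
    the training set, adding any instance that the 1-NN rule over the
    current S misclassifies, until a full pass adds nothing.  The result
    is a training-set-consistent subset.
    """
    S = [0]                                   # indices kept in the condensed subset
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in S:
                continue
            dists = np.linalg.norm(X[S] - X[i], axis=1)
            nearest = S[int(np.argmin(dists))]
            if y[nearest] != y[i]:            # misclassified by S: must keep instance i
                S.append(i)
                changed = True
    return S

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
print(cnn_condense(X, y))
```

On this toy set the subset retains one instance per class, and 1-NN over the subset classifies all four training instances correctly, illustrating consistency. The sketch also exhibits CNN's noise sensitivity noted below: a mislabeled pattern would always be absorbed into the subset.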
CNN is a classic reduction algorithm, but it is very sensitive to noisy patterns and may keep some patterns that are far from the decision boundary. In recent years, new reduction techniques have been developed to overcome these drawbacks and improve performance. In 2007, Angiulli (Angiulli, 2007) introduced a novel algorithm, called the Fast Nearest Neighbor Condensation (FCNN) rule, which computes a training-set-consistent subset for the nearest neighbor decision rule and can deal with large-scale data classification. Fayed and Atiya (Fayed & Atiya, 2009) presented a simple and effective prototype reduction algorithm, namely template reduction for KNN (TRKNN). Its basic idea is to define so-called chains, sequences of nearest neighbors, and to set a cutoff value on the distances so as to retain the patterns close to the classification boundary as the "condensed set". Nikolaidis et al. (Nikolaidis, Goulermas, & Wu, 2011) proposed a class boundary preserving algorithm (CBP) that uses reachable sets to locate the border points and then prunes the training set. Later, Nikolaidis et al. (Nikolaidis, Rodriguez-Martinez, Goulermas, & Wu, 2012) introduced the spectral instance reduction (SIR) algorithm, which partitions the dataset into border and internal instances. In 2015, the binary nearest neighbor tree algorithm (BNNT) (Li & Wang, 2015) was presented; it obtains the prototype set by selecting and generating prototypes from a binary nearest neighbor tree, using two strategies. Patterns with different class labels within a tree are selected, and when a tree lies in the interior of a class, a centroid pattern is generated to replace its nodes.
The algorithms mentioned above aim to keep the patterns that contribute most to pattern classification and to remove the vast number of interior patterns along with all outlier patterns. Another strategy for lowering the high computation cost and storage requirements on large datasets is to use a partitioning scheme that divides the whole space into several smaller spaces. Typical examples are the cluster-based prototype reduction algorithms (Lumini & Nanni, 2006; Mollineda, Ferri, & Vidal, 2002). One example is Prototype Selection by Clustering (PSC) (Olvera-López, Carrasco-Ochoa, & Martínez-Trinidad, 2010), which divides the training set into many clusters and analyzes the non-homogeneous clusters to find border instances. Instance Reduction Algorithm using Hyperrectangle Clustering (IRAHC) (Hamidzadeh, Monsefi, & Sadoghi Yazdi, 2015) is a second example. PSC and IRAHC differ in their clustering methods, and IRAHC uses the mean of the interior instances as their representative.