Introduction
Class imbalance is a well-known and challenging problem in the machine learning community. It refers to applications where data from different classes are noticeably unevenly distributed. For a binary classification problem, it means that samples from one class (usually called the majority or negative class) significantly outnumber those from the other (named the positive or minority class). Traditional classification algorithms generally fail to work adequately on problems with skewed class distributions. They are designed to generalize from sample data and produce the simplest hypothesis that best fits the data. This learning principle is embodied in the inductive bias of some machine learning algorithms such as decision trees, which prefer small trees over large ones (Akbani et al., 2004). As a result, given an imbalanced data set, they often generate a hypothesis that classifies almost all samples as negative.
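To make the point concrete, the following sketch (illustrative only, not from the paper) shows why such a degenerate hypothesis can still look deceptively good when judged by accuracy alone:

```python
def majority_baseline_accuracy(labels):
    """Accuracy of a trivial classifier that always predicts the majority class."""
    n = len(labels)
    n_negative = sum(1 for y in labels if y == 0)
    n_positive = n - n_negative
    return max(n_negative, n_positive) / n

# 990 normal (negative) transactions and 10 fraudulent (positive) ones.
labels = [0] * 990 + [1] * 10
acc = majority_baseline_accuracy(labels)
print(f"majority-class baseline accuracy: {acc:.3f}")  # 0.990

# Yet this hypothesis detects none of the 10 positives: its recall is zero,
# which is exactly the failure mode described above.
```

With a 99:1 class ratio, "predict everything as negative" already scores 99% accuracy while catching no fraud, which is why accuracy is an unreliable yardstick under class imbalance.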
Such a hypothesis is clearly useless in practice. For instance, suppose we need to build a model from a data set of customer transaction records in which only a very tiny portion of the transactions are confirmed fraudulent and the rest are deemed normal. To protect customers and their financial assets, we are most interested in successfully detecting as many fraudulent activities as possible. In this kind of real-world scenario, the hypothesis described above obviously cannot achieve the desired outcome. To simplify the discussion, this paper focuses only on binary classification problems.
For the class imbalance problem, the degree of imbalance between classes may not be the only issue that hinders learning. Several studies (He & Garcia, 2009; Galar et al., 2012) have pointed out that data complexity is the primary factor in classification performance deterioration, and that its effect is in fact intensified by a skewed class distribution. More specifically, data complexity comprises issues such as class overlapping (which makes discriminative rules hard to induce), lack of representative data, and small disjuncts (which lead to underrepresented sub-concepts), all of which contribute to performance degradation.
Over the past decades, several approaches have been proposed to address the challenges of imbalanced classification (Krawczyk, 2016). Some of them either aim to shift the inductive bias towards the positive class or apply a data preprocessing procedure to reduce the potentially undesirable impact of class imbalance on model building. Other approaches assume higher misclassification costs for samples in the positive class and seek to minimize these costs during learning. Furthermore, several modifications or extensions of ensemble algorithms have recently been adapted to imbalanced modeling, either by embedding data preprocessing before applying each base learner or by integrating a cost-sensitive strategy into the ensemble learning process. We discuss these different approaches in detail in the next section.
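As a brief illustration of the cost-sensitive idea (a standard decision-theoretic result, not the contribution of any particular method above): if a missed positive costs `cost_fn` and a false alarm costs `cost_fp`, the probability threshold that minimizes expected cost shifts well below the usual 0.5:

```python
def cost_sensitive_threshold(cost_fp, cost_fn):
    """Bayes-optimal decision threshold under asymmetric misclassification
    costs: predict positive whenever the estimated P(positive | x) exceeds
    cost_fp / (cost_fp + cost_fn)."""
    return cost_fp / (cost_fp + cost_fn)

# Equal costs recover the familiar 0.5 cut-off.
print(cost_sensitive_threshold(1, 1))    # 0.5

# If missing a fraud is 99 times costlier than a false alarm, a sample
# should be flagged as fraudulent even at 1% estimated probability.
print(cost_sensitive_threshold(1, 99))   # 0.01
```

This is why cost-sensitive methods can counteract imbalance without touching the data: raising the cost of false negatives pushes the decision boundary towards the minority class.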
In this paper, we present a new hybrid learning framework called PRUSBoost for imbalanced classification. It applies a newly developed partition-based data under-sampling strategy and integrates it into the AdaBoost algorithm (Freund & Schapire, 1996). It aims to provide a unified framework in which we can informatively select some negative samples that exhibit mainstream characteristics of the class as well as some negative samples that reveal significantly less typical features of the class, and then combine them with the available positive samples to form a well-representative and balanced training set. This data selection process can be particularly helpful in the presence of data noise and class-overlapping regions in the data space. Once the training samples are constructed, we further enhance learning by building an ensemble of classifiers, in the hope of capturing most of the important underlying negative data patterns while learning most of the unique positive data features through an iterative process. The proposed framework can be considered a general under-sampling approach that includes the well-known RUSBoost method (Seiffert et al., 2010) as a special case. Experiments on several data sets with various imbalance ratios indicate that the framework is a very competitive and efficient alternative for handling imbalanced classification problems.
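Since the partition-based selection strategy is detailed later in the paper, the following is only a minimal sketch of the RUSBoost special case mentioned above: random under-sampling of the negative class before each AdaBoost round. For self-containment it uses 1-D decision stumps as base learners and, for brevity, measures each round's weighted error on the sampled set; all names here are illustrative, not the paper's implementation.

```python
import math
import random

def train_stump(X, y, w):
    """Weighted 1-D decision stump: pick the threshold/direction with the
    smallest weighted error on the given (sub)sample."""
    best = None
    for t in sorted(set(x[0] for x in X)):
        for sign in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if (1 if sign * (xi[0] - t) > 0 else 0) != yi)
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best[1], best[2]

def stump_predict(stump, x):
    t, sign = stump
    return 1 if sign * (x[0] - t) > 0 else 0

def rusboost_fit(X, y, rounds=10, seed=0):
    """RUSBoost-style loop: balance the data by random under-sampling of the
    negative class before every AdaBoost round."""
    rng = random.Random(seed)
    n = len(X)
    w = [1.0 / n] * n
    pos = [i for i in range(n) if y[i] == 1]
    neg = [i for i in range(n) if y[i] == 0]
    ensemble = []
    for _ in range(rounds):
        # Keep all positives, draw an equal number of negatives at random.
        idx = pos + rng.sample(neg, len(pos))
        stump = train_stump([X[i] for i in idx], [y[i] for i in idx],
                            [w[i] for i in idx])
        # Normalized weighted error of this round's learner on the sample.
        total = sum(w[i] for i in idx)
        err = sum(w[i] for i in idx
                  if stump_predict(stump, X[i]) != y[i]) / total
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, stump))
        # Standard AdaBoost update on the full set: up-weight mistakes.
        w = [wi * math.exp(alpha if stump_predict(stump, X[i]) != y[i] else -alpha)
             for i, wi in enumerate(w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def rusboost_predict(ensemble, x):
    score = sum(a * (1 if stump_predict(s, x) == 1 else -1)
                for a, s in ensemble)
    return 1 if score > 0 else 0

# Toy imbalanced, separable data: 20 negatives (0..19), 3 positives (25..27).
X = [[i] for i in range(20)] + [[25], [26], [27]]
y = [0] * 20 + [1] * 3
ens = rusboost_fit(X, y, rounds=5)
print(rusboost_predict(ens, [26]), rusboost_predict(ens, [2]))  # 1 0
```

The partition-based strategy of PRUSBoost replaces the uniform `rng.sample` step with an informed choice of negatives (both mainstream and atypical ones), while the surrounding boosting loop stays the same, which is why RUSBoost falls out as a special case.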