A Hybrid Learning Framework for Imbalanced Classification

Eric P. Jiang
Copyright: © 2022 | Pages: 15
DOI: 10.4018/IJIIT.306967

Abstract

Class imbalance is a well-known and challenging algorithmic research topic in the machine learning community, as traditional classifiers generally perform poorly on imbalanced problems, where the data to be learned have skewed distributions between their classes. This paper presents a hybrid framework named PRUSBoost for imbalanced classification. It combines a selective data under-sampling procedure with a powerful boosting strategy to effectively enhance classification performance on imbalanced problems. Unlike simple random under-sampling, this framework constructs the training data for the majority or negative class using a newly developed partition-based under-sampling approach. Experiments on several datasets from different application domains with skewed class distributions show that the proposed framework provides a very competitive, consistent, and effective solution to imbalanced classification problems.

Introduction

Class imbalance is a well-known and challenging problem in the machine learning community. It refers to applications where data from different classes are noticeably unevenly distributed. For a binary classification problem, it means that samples from one class (usually called the majority or negative class) significantly outnumber those from the other (the positive or minority class). Traditional classification algorithms generally fail to work adequately on problems with skewed class distributions. They are designed to generalize from sample data and produce the simplest hypothesis that best fits the data. This learning principle is embodied in the inductive bias of some machine learning algorithms such as decision trees, which prefer small trees over large ones (Akabani et al., 2004). As a result, given an imbalanced data set, they often generate a hypothesis that classifies almost all samples as negative.

Clearly, such a hypothesis can be useless in practice. For instance, suppose we need to build a model from a data set of customer transaction records in which only a very tiny portion of the transactions are confirmed fraudulent, while the rest are deemed normal. To protect customers and their financial assets, we are most interested in successfully detecting as many fraudulent activities as possible. In such real-world scenarios, the degenerate hypothesis described above obviously cannot achieve the desired outcome. To simplify the discussion, this paper focuses only on binary classification problems.
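The fraud-detection scenario above can be made concrete with a small illustrative calculation (not from the paper): at a 1-in-1,000 fraud rate, the trivial all-negative hypothesis scores very high accuracy yet detects no fraud at all, which is why accuracy alone is a poor yardstick for imbalanced problems.

```python
# Illustrative only: why raw accuracy misleads on imbalanced data.
# With 1 fraudulent transaction per 1,000, a model that labels
# everything "normal" still scores 99.9% accuracy.
y_true = [1] * 1 + [0] * 999          # 1 = fraud (positive), 0 = normal
y_pred = [0] * 1000                   # trivial all-negative hypothesis

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
recall = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / sum(y_true)

print(f"accuracy = {accuracy:.3f}")   # 0.999 -- looks excellent
print(f"recall   = {recall:.3f}")     # 0.000 -- catches no fraud at all
```

This is why the imbalanced-learning literature evaluates models with class-sensitive measures (recall, F-measure, AUC) rather than plain accuracy.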

For the class imbalance problem, the degree of imbalance between classes may not be the only issue that hinders learning. Several research papers (He & Garcia, 2009; Galar et al., 2012) have pointed out that data complexity is often the primary factor behind classification performance deterioration, and that it is intensified by a skewed class distribution. More specifically, data complexity comprises issues such as class overlapping (which makes discriminative rules hard to induce), a lack of representative data, and small disjuncts (which lead to underrepresented sub-concepts); all of these contribute to performance degradation.

Over the past decades, several approaches have been proposed to address the challenges of imbalanced classification (Krawczyk, 2016). Some of them either aim to shift the inductive bias towards the positive class or apply a data preprocessing procedure to reduce the potentially undesirable impact of class imbalance on model building. Other approaches assume higher misclassification costs for samples in the positive class and seek to minimize these costs during learning. Furthermore, several modifications or extensions of ensemble algorithms have recently been adapted for imbalanced modeling, either by embedding data preprocessing before applying each base learner or by integrating a cost-sensitive strategy into the ensemble learning process. We discuss these approaches in detail in the next section.
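As a point of reference for the data-preprocessing family of approaches, the simplest member is random under-sampling: discard majority-class samples at random until the classes are balanced. The sketch below (not the paper's method, and written with hypothetical helper names) shows the idea in plain Python.

```python
import random

def random_undersample(X, y, seed=0):
    """Balance a binary data set by randomly discarding majority
    (label 0) samples until both classes are the same size.
    Illustrative baseline only; label 1 is the minority class."""
    rng = random.Random(seed)
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    keep = pos + rng.sample(neg, len(pos))   # equal-sized classes
    rng.shuffle(keep)
    return [X[i] for i in keep], [y[i] for i in keep]

X = [[float(i)] for i in range(100)]
y = [1] * 5 + [0] * 95                       # 5% positive class
Xb, yb = random_undersample(X, y)
print(sum(yb), len(yb))                      # 5 positives of 10 total
```

Its weakness, which motivates more selective schemes, is that the discarded negatives are chosen blindly, so informative majority-class samples may be lost.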

In this paper, we present a new hybrid learning framework called PRUSBoost for imbalanced classification. It applies a newly developed partition-based data under-sampling strategy and integrates it into the AdaBoost algorithm (Freund & Schapire, 1996). It aims to provide a unified framework in which we can informatively select some negative samples that exhibit mainstream characteristics of the class, along with some negative samples that reveal significantly less typical features of the class, and then combine them with the available positive samples to form a well-representative and balanced training set. This data selection process can be particularly helpful in the presence of data noise and class-overlapping regions in the data space. Once the training data are constructed, we further enhance the learning by building an ensemble of classifiers, in the hope of capturing most of the important underlying negative data patterns while learning most of the unique positive data features through an iterative process. The proposed framework can be considered a general under-sampling approach that includes the well-known RUSBoost method (Steiffert et al., 2010) as a special case. Experiments on several data sets with various imbalance ratios indicate that the framework represents a very competitive and efficient alternative for handling imbalanced classification problems.
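To make the selection idea above concrete, the following is a minimal sketch of one way a partition-style under-sampling step could look, assuming distance to the negative-class centroid as the measure of how "typical" a negative sample is. The function name, the `frac_typical` parameter, and the centroid-distance criterion are all assumptions for illustration; the paper's actual partitioning scheme is described in its later sections, and the balanced set produced here would then be fed to each AdaBoost iteration.

```python
import math
import random

def partition_undersample(X, y, frac_typical=0.7, seed=0):
    """Hedged sketch of a partition-style under-sampling step: keep a
    mix of "mainstream" negatives (closest to the negative-class
    centroid) and "less typical" negatives (farthest from it), sized
    to match the positive class."""
    rng = random.Random(seed)
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]

    # Centroid of the negative (majority) class.
    dim = len(X[0])
    centroid = [sum(X[i][d] for i in neg) / len(neg) for d in range(dim)]

    # Sort negatives from most typical (near centroid) to least typical.
    neg_sorted = sorted(neg, key=lambda i: math.dist(X[i], centroid))
    n_typ = int(frac_typical * len(pos))     # mainstream negatives
    n_aty = len(pos) - n_typ                 # less typical negatives
    chosen = neg_sorted[:n_typ] + neg_sorted[-n_aty:]

    keep = pos + chosen                      # balanced training set
    rng.shuffle(keep)
    return [X[i] for i in keep], [y[i] for i in keep]

rng = random.Random(1)
X = [[rng.gauss(0, 1)] for _ in range(200)]
y = [1] * 20 + [0] * 180                     # 10% positive class
Xb, yb = partition_undersample(X, y)
print(sum(yb), len(yb))                      # 20 positives of 40 total
```

Including the far-from-centroid negatives is what distinguishes this sketch from plain random under-sampling: boundary-region and atypical majority samples survive the reduction, which is the intuition behind selecting both mainstream and less typical negatives.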
