A Study on Class Imbalancing Feature Selection and Ensembles on Software Reliability Prediction

Jhansi Lakshmi Potharlanka (Vignan's Foundation for Science Technology and Research, Guntur, India), Maruthi Padmaja Turumella (Vignan's Foundation for Science Technology and Research, Guntur, India) and Radha Krishna P. (NIT Warangal, Hanamkonda, India)
Copyright: © 2019 |Pages: 24
DOI: 10.4018/IJOSSP.2019100102

Abstract

Software quality can be improved by early software defect prediction models. However, class imbalance caused by the under-representation of defects, and the irrelevant metrics used to predict them, are two major challenges that hinder model performance. This article presents a new two-stage framework, an Ensemble of Hybrid Feature selection (EHF) combined with Weighted Support Vector Machine Boosting (WSVMBoost), which further enhances model performance. EHF ensembles the feature rankings of filter and embedded feature selection models to select the relevant metrics. The classification ensembles Random Forest, RUSBoost, and WSVMBoost, and the base learners Decision Tree and SVM, are also explored in this study on five software reliability datasets. In statistical tests, EHF with WSVMBoost attained the best mean performance rank among the feature selection hybrids for predicting software defects. Additionally, this study shows that the McCabe and Halstead method-level metrics are equally important in improving model performance.

Introduction

Software testing is intended to identify defect-prone artefacts and is one of the most time-intensive and expensive processes in the software product development life cycle. Automatic Software Defect Prediction (SDP) models are used for the early identification of bug-prone modules. This helps the quality assurance team allocate limited resources to the defective modules with ample bugs (Nam, 2014; Song, Guo, & Shepperd, 2018; Kamei & Shihab, 2016; Li, Jing, & Zhu, 2018). In addition, bug reports can be generated and the quality of the product ensured. Consequently, a quality product is delivered quickly at minimum testing cost. SDP, i.e., clean/buggy artefact identification, is a data classification problem (Catal & Diri, 2009a). Statistical methods (Nagappan & Ball, 2005) and machine learning techniques (Zhang & Zhang, 2007; Gray, Bowes, Davey, Sun, & Christianson, 2009; Al-Jamimi & Ghouti, 2011) are prominently applied to develop effective defect prediction models.

However, inherent data characteristics, namely limited defect-class data (class imbalance), dataset size, and the quality of the metrics used for defect prediction, aggravate the learning problem and hence degrade the performance of SDP models (Catal & Diri, 2009b). The class imbalance problem is one of the critical problems in machine learning, in which one or more classes outnumber the others. Consequently, base learners exhibit poor or no predictive ability on the underrepresented class. Methods that address the class imbalance problem have therefore been employed to devise better SDP models (Khoshgoftaar & Seliya, 2002). These methods include sampling approaches (e.g., Random Under Sampling (RUS), Random Over Sampling (ROS), and the Synthetic Minority Over-sampling Technique (SMOTE)), ensemble approaches (e.g., Random Forest and AdaBoost), cost-sensitive approaches (e.g., cost-sensitive neural networks), and hybrids of ensemble and cost-sensitive learning (e.g., cost-sensitive boosting). SDP studies that employ class imbalance methods have shown better results with Random Forest (Ma, Guo, & Cukic, 2006; Lessmann, Baesens, Mues, & Pietsch, 2008). Further, methods that address the class imbalance problem in conjunction with feature selection (ranking) have been studied by Huanjing, Khoshgoftaar, Van Hulse, and Gao (2011), Khoshgoftaar, Gao, and Seliya (2010), Gao, Khoshgoftaar, and Seliya (2012), and Gao, Khoshgoftaar, and Napolitano (2015). Among these, sampling-based boosting techniques such as undersampling-based boosting (RUSBoost) and oversampling-based boosting (SMOTEBoost) (Gao et al., 2015) have been employed in conjunction with feature selection methods; that study showed RUSBoost performed better than SMOTEBoost.
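The random undersampling step that RUSBoost builds on can be illustrated with a minimal sketch. This is plain Python, not the paper's implementation; the function name, seed handling, and binary-class assumption are illustrative.

```python
import random
from collections import Counter

def random_undersample(X, y, seed=42):
    """Balance a binary dataset by randomly discarding majority-class
    samples until both classes are the same size -- the RUS step that
    sampling-based boosting applies before (or inside) each round."""
    rng = random.Random(seed)
    counts = Counter(y)
    minority = min(counts, key=counts.get)
    n_keep = counts[minority]
    minority_idx = [i for i, label in enumerate(y) if label == minority]
    majority_idx = [i for i, label in enumerate(y) if label != minority]
    # Keep every minority sample; sample the majority class down to n_keep.
    kept = sorted(minority_idx + rng.sample(majority_idx, n_keep))
    return [X[i] for i in kept], [y[i] for i in kept]
```

In a RUSBoost-style learner, a resampled set like this would be drawn in each boosting round before fitting the weak learner, so that each round sees a balanced view of the data.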

The acronyms of the methods used in the study are represented in Table 1.

Table 1. Acronyms of the methods used in this study

Acronym     Expansion
IG          Information Gain
HD          Hellinger Distance
SVM_RFE     Support Vector Machine Recursive Feature Elimination
DT          Decision Tree
NN          Neural Networks
NB          Naive Bayes
RF          Random Forest
RUSBoost    Random Undersampling-Based Boosting
WSVMBoost   Weighted SVM Boost
ROS         Random Oversampling
RUS         Random Undersampling
SMOTE       Synthetic Minority Oversampling Technique
WE          Wilson's Editing
OD/FA       Original Data / Full Attributes
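The ensemble feature ranking idea behind EHF (combining the rankings produced by filters such as IG or HD with those of embedded models such as SVM_RFE) can be sketched as mean-rank aggregation. The aggregation rule below and the metric names are illustrative assumptions, not the paper's exact EHF procedure.

```python
def aggregate_rankings(rankings):
    """Combine several feature rankings into one ensemble ranking by
    mean rank position (0 = most relevant).  Each input is a list of
    feature names ordered from most to least relevant; all lists must
    contain the same features."""
    features = rankings[0]
    mean_rank = {
        f: sum(r.index(f) for r in rankings) / len(rankings)
        for f in features
    }
    # Features with the lowest average position come first.
    return sorted(features, key=lambda f: mean_rank[f])
```

A two-stage framework in the spirit of the article would then keep the top-k features of the aggregated ranking and train the class-imbalance-aware classifier on that reduced metric set.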
