Introduction
Software testing is primarily intended to identify defect-prone artefacts, and it is one of the most time-intensive and expensive processes in the software product development life cycle. Automatic Software Defect Prediction (SDP) models are typically used for the early identification of bug-prone modules. This helps the quality assurance team allocate limited resources to the defective modules containing the most bugs (Nam, 2014; Song, Guo, & Shepperd, 2018; Kamei & Shihab, 2016; Li, Jing, & Zhu, 2018). In addition, bug reports can be generated and the quality of the product can be ensured. Consequently, a quality product is delivered quickly at minimum testing cost. SDP, i.e., clean/buggy artefact identification, is a data classification problem (Catal & Diri, 2009a). Statistical methods (Nagappan & Ball, 2005) and machine learning techniques (Zhang & Zhang, 2007; Gray, Bowes, Davey, Sun, & Christianson, 2009; Al-Jamimi & Ghouti, 2011) are widely applied to develop effective defect prediction models.
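To make the classification framing concrete, the following is a minimal sketch, not any model from the cited studies: modules are described by a single hypothetical code metric (cyclomatic complexity), labelled clean (0) or buggy (1), and a simple threshold "learner" stands in for the statistical and machine-learning models discussed above. All data values are invented for illustration.

```python
def fit_threshold(metrics, labels):
    """Pick the complexity cutoff that best separates clean (0)
    from buggy (1) modules on the training data."""
    best_t, best_acc = 0, -1.0
    for t in sorted(set(metrics)):
        preds = [1 if m >= t else 0 for m in metrics]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Hypothetical training modules: cyclomatic complexity and clean/buggy label.
complexity = [2, 3, 4, 10, 12, 15]
labels     = [0, 0, 0, 1,  1,  1]

t = fit_threshold(complexity, labels)        # learned cutoff: 10
predict = lambda m: 1 if m >= t else 0       # classify a new module
```

Real SDP models replace the single metric with a vector of code and process metrics and the threshold rule with a trained classifier, but the clean/buggy decision structure is the same.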
However, inherent data characteristics, namely limited defect-class data (class imbalance), dataset size, and the quality of the metrics used for defect prediction, aggravate the learning problem and hence degrade the performance of SDP models (Catal & Diri, 2009b). The class imbalance problem is one of the critical problems in machine learning, arising when one or more classes of data outnumber the others. Consequently, base learners exhibit poor or no predictive ability on the underrepresented class. Methods that address the class imbalance problem have therefore also been employed to devise better SDP models (Khoshgoftaar & Seliya, 2002). These methods include sampling approaches (e.g., Random Under Sampling (RUS), Random Over Sampling (ROS), and the Synthetic Minority Over-sampling Technique (SMOTE)), ensemble approaches (e.g., Random Forest and AdaBoost), cost-sensitive approaches (e.g., cost-sensitive neural networks), and hybrids of ensemble and cost-sensitive learning (e.g., cost-sensitive boosting). SDP studies that employ class imbalance methods have shown better results with Random Forest (Ma, Guo, & Cukic, 2006; Lessmann, Baesens, Mues, & Pietsch, 2008). Further, methods addressing the class imbalance problem in conjunction with feature selection (ranking) have been studied in (Huanjing, Khoshgoftaar, Van Hulse, & Gao, 2011; Khoshgoftaar, Gao, & Seliya, 2010; Gao, Khoshgoftaar, & Seliya, 2012; Gao, Khoshgoftaar, & Napolitano, 2015). Among these adoptions, sampling-based boosting techniques such as undersampling-based boosting (RUSBoost) and oversampling-based boosting (SMOTEBoost) (Gao et al., 2015) are employed in conjunction with feature selection methods; that study showed RUSBoost performing better than SMOTEBoost.
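Of the sampling approaches above, RUS is the simplest to illustrate. The sketch below, with invented module names and counts, shows the core idea: discard majority-class (clean) examples at random until the training set is balanced with the minority (buggy) class.

```python
import random

def random_undersample(majority, minority, seed=0):
    """Random Under Sampling (RUS): keep a random subset of the
    majority class equal in size to the minority class."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    kept = rng.sample(majority, len(minority))
    return kept + minority

# Hypothetical imbalanced defect data: 9 clean modules, 3 buggy modules.
clean = [(f"m{i}", "clean") for i in range(9)]
buggy = [(f"b{i}", "buggy") for i in range(3)]

balanced = random_undersample(clean, buggy)
# balanced now holds 3 clean and 3 buggy modules.
```

RUSBoost applies this resampling inside each boosting round rather than once up front, which is why it pairs naturally with the boosted ensembles discussed above.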
The acronyms of the methods used in this study are listed in Table 1.
Table 1. Acronyms of the methods used in this study
Acronym | Full Form |
IG | Information Gain |
HD | Hellinger Distance |
SVM_RFE | Support Vector Machine Recursive Feature Elimination |
DT | Decision Tree |
NN | Neural Networks |
NB | Naive Bayes |
RF | Random Forest |
RUSBoost | Random Undersampling Based Boosting |
WSVMBoost | Weighted SVM Boost |
ROS | Random Oversampling |
RUS | Random Undersampling |
SMOTE | Synthetic Minority Oversampling Technique |
WE | Wilson’s Editing |
OD/FA | Original Data / Full Attributes |