1. Introduction
Software of high quality, which meets the user's needs and requirements and performs as expected, has always been in demand. Software must ensure failure-free operation; that is the reliability of the product. Many quality attributes and metrics, along with numerous quality assurance techniques, have been developed, but the question of how to ensure that the resulting product will possess good quality remains an open problem. The early detection of failure-prone modules correlates directly with the quality of the end product. Fault prediction involves the early detection of those "risky" modules of the software that are prone to errors, impair quality, and will incur heavy development (testing) and maintenance cost. Early detection of faulty (buggy) modules improves the effectiveness of quality-enhancement activities and ultimately improves overall quality.

Software fault prediction is used as an indicator of software quality for two reasons: (1) Quality is inversely proportional to failures, which in turn are caused by faults (development anomalies). To ensure high quality, failures must be minimized to zero, which can be achieved only by complete (100%) detection and removal of defects from the modules. Hence, accuracy in defect prediction is the most crucial factor in judging the quality of a software product. (2) Early fault detection gives the entire development team the decisive power to allocate testing resources strategically. Quality-improvement activities can be applied intensively to the detected risky modules (Kan, 2002), scheduling can be done more effectively to avoid delays, and failure-prone modules can be prioritized for testing. Ultimately, quality can be improved and assured by predicting faults in the early phases of the development cycle. If faulty modules are not detected in the early development phases, the cost of fixing a defect increases manifold.
Along with the increased cost, the chance that a defect is detected by the customer in the live environment also increases. A defect found in the live environment may halt operational procedures, which can eventually have fatal consequences. The software industry has already witnessed such failures, such as NASA's Mars Climate Orbiter (MCO), a spacecraft worth $125 million that was lost in space due to a small data-conversion bug (NASA, 2015).
To date, many researchers have contributed to this problem domain, but the gap is that their results do not agree. Their work does not generalize; it is biased toward the specific datasets or techniques used to solve the problem.
Hence, the more accurately faults are predicted, the more precisely quality can be predicted. The present work focuses on the following research goals:
R1: To transform the software quality prediction problem into a learning problem (a classification problem).
R2: To create ML prediction models using static code metrics as predictors.
R3: To evaluate the accuracy of the prediction models empirically.
R4: To find which ML technique outperforms the others.
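The formulation behind R1 and R2 can be sketched as supervised binary classification: each module becomes a row of static code metrics, labelled faulty (1) or non-faulty (0), and a classifier learns to predict the label for unseen modules. The metric names and toy values below are illustrative assumptions, not data from the datasets used in this study.

```python
# Sketch: fault prediction as a two-class classification problem.
# The feature names (LOC, cyclomatic complexity, operand count) are example
# static code metrics; the toy training data is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row: [lines_of_code, cyclomatic_complexity, num_operands]
X_train = np.array([
    [120,  4,  40],
    [850, 22, 300],
    [ 60,  2,  15],
    [940, 30, 410],
])
y_train = np.array([0, 1, 0, 1])  # 0 = non-faulty, 1 = faulty

# A decision tree stands in for any of the classifiers considered here.
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# The classifier generates the module's "quality" as a predicted class label.
new_module = np.array([[900, 25, 350]])
print(clf.predict(new_module))
```

On this separable toy data the tree labels the large, complex new module as faulty; with real datasets the same fit/predict interface applies unchanged.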
The major contribution of this work is to develop and validate a cross-platform, generalized model that accurately predicts the faulty modules in software during development, so that defects do not propagate to the final phases and, ultimately, the quality of the software is improved, since the quality of software is inversely proportional to the number of defects in the end product (Kan, 2002).
In this paper: (1) the software quality prediction problem is formulated as a two-class classification problem; following this approach, a few features (attributes) from previous project datasets are used as predictors, and the quality of a module is generated as the response by the classifier in the form of a predicted class label. (2) In total, 30 models are built for software quality prediction using 5 ML techniques (ANN, SVM, Naïve-Bayes classifier, DT, and k-nearest neighbor) over 6 datasets; each model is validated using 10-fold cross-validation. (3) An empirical comparison among the developed prediction models is made using ROC, AUC, and accuracy as performance-evaluation criteria.
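A minimal sketch of this evaluation protocol, using scikit-learn stand-ins for the five techniques (e.g., `MLPClassifier` for the ANN) and a synthetic dataset in place of the six real fault datasets, could look like this:

```python
# Sketch of the protocol: 5 classifiers, 10-fold cross-validation, accuracy
# and AUC as criteria. The dataset is synthetic; the study uses six real
# fault datasets with static code metrics as features.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=42)

models = {
    "ANN": MLPClassifier(max_iter=2000, random_state=42),
    "SVM": SVC(random_state=42),
    "Naive-Bayes": GaussianNB(),
    "DT": DecisionTreeClassifier(random_state=42),
    "k-NN": KNeighborsClassifier(),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
results = {}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy").mean()
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()
    results[name] = (acc, auc)
    print(f"{name}: accuracy={acc:.3f}, AUC={auc:.3f}")
```

Running each of the five learners over each of the six real datasets in this fashion yields the 30 models compared in the study; ROC curves can additionally be plotted per model from the fold-wise predicted scores.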