SRGM Decision Model Considering Cost-Reliability

Wenqian Jiang, Ce Zhang, Di Liu, Kaiwei Liu, Zhichao Sun, Jianyuan Wang, Zhongyin Qiu, Weigong Lv
Copyright: © 2022 |Pages: 19
DOI: 10.4018/IJDCF.302873

Abstract

Current research on software cost models and optimal release does not fully account for the faults actually encountered during the testing phase. To address this, a cost-reliability SRGM evaluation and selection algorithm, SESABCRC, is proposed. An imperfect-debugging SRGM is established that incorporates incomplete debugging, the introduction of new faults, and testing effort. Validation against actual failure data sets shows that the proposed SRGM describes the software testing process well and outperforms other models. Based on the proposed SRGM, a corresponding cost function is given that explicitly accounts for the impact of imperfect debugging on cost. Furthermore, an optimal release strategy is proposed for the case of a fixed reliability target requirement, taking into account the uncertainty that the actual cost may exceed the expected cost. Finally, an experimental example illustrates and verifies the optimal release problem, and a parameter sensitivity analysis is carried out.

Introduction

SRGM (Software Reliability Growth Model) is an important mathematical tool for modeling and predicting the reliability improvement process during software testing (Ahmad, Bokhari, Quadri & Khan, 2008; Dohi, Matsuoka & Osaki, 2002; Zhang, Meng & Wan, 2016). Accurately modeling software reliability and predicting its trends are essential to determining the reliability of the entire product (Yamada, 2014; Okamura, Etani & Dohi, 2011; Okamura & Dohi, 2014; Zhang, Meng, Kao, Lü, Liu, Wan, Jiang, Cui & Liu, 2014). An SRGM is mainly specified by establishing a mathematical model of the testing process and deriving an expression for m(t), the expected cumulative number of failures by test time t. A study based only on m(t) addresses the SRGM purely from the reliability perspective. At the same time, the testing cost must be considered, that is, the TE (testing effort), which is closely related to cost (Zhang, Meng, Kao, Lü, Liu, Wan, Jiang, Cui & Liu, 2014). TE describes the consumption of testing resources and can be represented by a TEF (testing effort function). The release of software must consider not only reliability requirements but also cost (Zhang, Cui, Liu, Meng & Fu, 2014; Huang & Lyu, 2005); that is, software release must be judged against the comprehensive "cost-reliability" criterion. Therefore, TE has become an important branch of SRGM research and has yielded a series of results.
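To make the role of m(t) concrete, the following sketch uses the classic Goel-Okumoto NHPP mean value function, m(t) = a(1 − e^(−bt)), rather than the imperfect-debugging model proposed in this article; the parameter values are illustrative only. Under an NHPP, the reliability over the next x time units after testing time t is R(x | t) = exp(−(m(t + x) − m(t))):

```python
import math

def go_mean_value(t: float, a: float, b: float) -> float:
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b * t)).

    a: expected total number of faults; b: per-fault detection rate.
    """
    return a * (1.0 - math.exp(-b * t))

def reliability(t: float, x: float, a: float, b: float) -> float:
    """R(x | t) = exp(-(m(t + x) - m(t))): probability of no failure
    in the interval (t, t + x] under an NHPP with mean value m(t)."""
    return math.exp(-(go_mean_value(t + x, a, b) - go_mean_value(t, a, b)))

# Illustrative values: a = 100 total faults, detection rate b = 0.1.
m10 = go_mean_value(10.0, a=100.0, b=0.1)   # expected failures by t = 10
r = reliability(10.0, 1.0, a=100.0, b=0.1)  # reliability over the next time unit
```

Optimal-release analyses of the kind described in the abstract evaluate expressions like R(x | t) against a reliability target while a cost function accumulates over the same m(t).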

Counting from the G-O model (Ahmad, Khan & Rafi, 2010) in the late 1970s, SRGM research has spanned two centuries, with a history of nearly 40 years, and hundreds of related models have been proposed. These results have enriched the field, but they have also made the evaluation and selection of SRGMs more difficult. At present, the performance of an SRGM is mainly evaluated from the perspectives of fitting and prediction, that is, how well m(t) fits the real historical failure data and how well it predicts future failures. For example, when evaluating the fit between a model and historical data, MSE, Variation, MEOP, TS, RMS-PE, BMMRE and R-square (Goel & Okumoto, 1979) are often chosen as metrics. Among them, the closer R-square is to 1 the better, whereas for the other metrics smaller values are better; RE is used to evaluate a model's prediction of future data, and the closer RE is to 0, the better the prediction. In practice, however, it is still difficult to find a model that performs well on all of the above fitting and prediction criteria across different data sets. In addition, it is difficult, and inherently non-quantitative, to judge model performance intuitively from these values alone.
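As an illustration of three of the metrics named above (using their standard textbook definitions, not formulas specific to this article, and with made-up data), the fit and prediction criteria can be computed from observed cumulative failure counts and a model's m(t) predictions:

```python
def mse(observed, predicted):
    """Mean squared error between observed and predicted cumulative failures
    (smaller is better)."""
    n = len(observed)
    return sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n

def r_square(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot
    (closer to 1 is better)."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def relative_error(observed_total, predicted_total):
    """RE of the predicted total failure count (closer to 0 is better)."""
    return (predicted_total - observed_total) / observed_total

# Illustrative data: cumulative failures observed at five test intervals.
obs = [5, 12, 20, 26, 30]
pred = [6, 11, 19, 27, 31]
fit_mse = mse(obs, pred)                      # 1.0
fit_r2 = r_square(obs, pred)                  # close to 1
re = relative_error(obs[-1], pred[-1])        # small positive value
```

The difficulty noted in the text is visible even here: one model may minimize MSE on one data set while another scores a better RE, so no single value ranks the candidate models conclusively.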
