Testing-Effort Dependent Software Reliability Model for Distributed Systems

Omar Shatnawi
Copyright: © 2013 | Pages: 14
DOI: 10.4018/jdst.2013040101

Abstract

Distributed systems are being developed in the context of the client-server architecture, which dominates the landscape of computer-based systems. Client-server systems are developed using classical software engineering activities. Developing distributed systems consumes time and resources, and even as the degree of automation of software development activities increases, resources remain an important limitation. Reusability is widely believed to be a key means of improving software development productivity and quality. Software metrics are needed to identify where resources are required; they are an extremely important source of information for decision making. In this paper, an attempt has been made to describe the relationship between calendar time, the fault removal process, and testing-effort consumption in a distributed development environment. Software fault removal phenomena and testing-effort expenditures are described by a non-homogeneous Poisson process (NHPP) and testing-effort curves, respectively. Actual software reliability data cited in the literature have been used to demonstrate the proposed model. The results are fairly encouraging.
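A generic formulation of this class of models (a sketch under standard SRGM conventions, not the paper's specific equations, which are not reproduced in this preview) couples an NHPP over testing time with a cumulative testing-effort curve:

```latex
% Sketch of a testing-effort-dependent NHPP SRGM (illustrative notation only).
% N(t): cumulative faults removed by testing time t; m(t): mean value function;
% W(t): cumulative testing effort; a, b, alpha, beta, gamma: generic parameters.
\Pr\{N(t)=n\} = \frac{[m(t)]^{n}}{n!}\, e^{-m(t)}, \qquad
m(t) = a\left(1 - e^{-b\,W(t)}\right), \qquad
W(t) = \alpha\left(1 - e^{-\beta t^{\gamma}}\right)
```

In sketches of this kind, the fault-removal rate is proportional to the number of remaining faults per unit of expended testing effort, and W(t) may equally be taken as an exponential, Rayleigh, or logistic curve depending on how the test team consumes effort over the schedule.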

Introduction

The software development environment is changing from a host-concentrated one to a distributed one due to cost and quality considerations and the rapid growth of network computing technologies. Distributed systems are being developed in the context of the client-server architecture, which dominates the landscape of computer-based systems. Everything from automatic teller networks to the Internet exists because software residing on one computer—the client—requests services and/or data from another computer—the server. Client-server software engineering blends conventional principles, concepts, and methods with elements of object-oriented and component-based software engineering to create client-server systems, and these systems are developed using classical software engineering activities.

Distributed systems are growing rapidly in response to improvements in computer hardware and software, and this growth is matched by the evolution of the technologies involved (Zhao et al., 2010). Developing large-scale distributed systems (LSDS) is complex, time-consuming, and expensive. LSDS developments are now common in air traffic control, telecommunications, defense, and space. In these systems, a release is often developed over 2-4 years and costs in excess of 200-300 person-years of development effort (Kapur et al., 2004b). Due to their complexity, LSDS are hardly ever "perfect" (Lavinia et al., 2011). Features define the content of a release, and they are realized by mapping the full set of system feature requirements across the various components. Normally this involves building on a large existing software base made up of existing components that are then modified and extended to engineer the new release. Requirement changes are common in these developments because of the long lead times involved. Change is inevitable, since user requirements, component interfaces, and developers' understanding of their application domain all change. This adds to the complexity of planning and controlling the in-progress development, both at the individual-component level and at the release level. A further complexity factor is the integration and validation of the release once the individual components are delivered. By their nature, these large-scale systems must achieve high reliability. In final release validation, software defects arise not only from the new software but also from latent defects in the existing code. Substantial regression tests must be run to check out the existing software base, which may amount to millions of lines of code (Kapur et al., 2004b).

Successful operation of any computer system depends largely on its software components. Thus, it is very important to ensure the quality of the underlying software, in the sense that it performs the functions it is designed and built for. To express the quality of the software to end users, objective attributes such as reliability and availability should be measured. Software reliability is the most dynamic quality attribute (metric), as it can measure and predict the operational quality of the software. A software reliability model (SRM) is a tool that can be used to evaluate the software quantitatively, develop test cases, track schedule status, and monitor changes in reliability performance. In particular, SRMs that describe software failure occurrence or the fault removal phenomenon in the system testing phase are called software reliability growth models (SRGMs). Among these, non-homogeneous Poisson process (NHPP) models can be easily applied in actual software development.
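To make the preceding description concrete, the short sketch below evaluates a generic testing-effort-dependent NHPP SRGM of the kind outlined after the abstract. It is a minimal illustration with hypothetical parameter values, not the model or the parameter estimates reported in the paper: cumulative testing effort W(t) follows a Weibull-type curve, the expected number of removed faults is m(t) = a(1 - exp(-b W(t))), and conditional reliability over a mission window x is R(x|t) = exp(-[m(t+x) - m(t)]).

```python
import math

# --- Hypothetical parameters (for illustration only; not estimates from the paper) ---
A = 100.0      # a: expected total number of faults in the software
B = 0.05       # b: fault-removal rate per unit of testing effort
ALPHA = 60.0   # alpha: total testing effort eventually expended
BETA = 0.02    # beta: scale of the Weibull-type effort curve
GAMMA = 1.5    # gamma: shape of the Weibull-type effort curve


def testing_effort(t: float) -> float:
    """Cumulative testing effort W(t) = alpha * (1 - exp(-beta * t**gamma))."""
    return ALPHA * (1.0 - math.exp(-BETA * t ** GAMMA))


def mean_faults(t: float) -> float:
    """NHPP mean value function m(t) = a * (1 - exp(-b * W(t)))."""
    return A * (1.0 - math.exp(-B * testing_effort(t)))


def reliability(x: float, t: float) -> float:
    """Conditional reliability R(x|t): probability of no failure in (t, t+x]."""
    return math.exp(-(mean_faults(t + x) - mean_faults(t)))


if __name__ == "__main__":
    # Track effort consumption, expected fault removal, and one-week reliability
    # as testing progresses (time unit is arbitrary, e.g. weeks).
    for week in (5, 10, 20, 40):
        print(f"week {week:3d}: W(t)={testing_effort(week):6.2f}, "
              f"m(t)={mean_faults(week):6.2f}, R(1|t)={reliability(1, week):.4f}")
```

In practice, the parameters of such a model would be estimated from observed fault-count and effort data (for example, by least squares or maximum likelihood), which is the role the actual software reliability data cited in the paper play.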
