
Lefteris Angelis (Aristotle University of Thessaloniki, Greece), Nikolaos Mittas (Aristotle University of Thessaloniki, Greece) and Panagiota Chatzipetrou (Aristotle University of Thessaloniki, Greece)

Copyright: © 2015
|Pages: 27

DOI: 10.4018/978-1-4666-6359-6.ch003

Chapter Preview

Software has become the key element of any computer-based system and product. The complicated structure of software and the continuously increasing demand for quality products justify the high importance of software engineering in today's world, as it offers a systematic framework for the development and maintenance of software. One of the most important activities in the initial project phases is Software Cost Estimation (SCE). During this stage, a software project manager attempts to estimate the effort and time required for the development of a software product. The importance of software engineering and the role of cost estimation in software project planning have been discussed widely in the literature (Jorgensen & Shepperd, 2007). Cost estimations may be performed before, during, or even after the development of software.

The complicated nature of software projects, and the difficult problems this creates for SCE procedures, gave rise to a whole area of research within the wider field of software engineering. A substantial part of the research on SCE concerns the construction of software cost estimation models. These models are built by applying statistical methodologies to historical datasets containing attributes of finished software projects. The scope of cost estimation models is twofold: first, they can provide a theoretical framework for describing and interpreting the dependence of cost on project characteristics; second, they can be utilized to produce efficient cost predictions. Although the second use is the most important for practical purposes, the first is equally significant, since it provides a basis for thorough studies of how the various project attributes interact and affect cost. Cost models are therefore valuable not only to practitioners but also to researchers whose work is to analyse and interpret them.

In the process of constructing cost models, a major problem arises from the fact that missing values are often encountered in historical datasets. Missing data are frequently responsible for misleading results regarding the accuracy of cost models and may reduce their explanatory and predictive ability. This problem is very important in the area of software project management because most software databases suffer from missing values, and this can happen for several reasons.

A common reason is the cost and difficulty that some companies face in collecting the data. In some cases, the money and time needed to collect certain information are prohibitive for a company or organization. In other cases, data collection is very difficult because it demands consistency, experience, time, and methodology. An additional source of incomplete values is the fact that data are often collected with a different purpose in mind, or that the measurement categories are generic and thus not applicable to all projects. This seems especially likely when data are collected from a number of companies. So, for researchers whose purpose is to study projects from different companies and build cost models on them, the handling of missing data is an essential preliminary step (Chen, Boehm, Menzies & Port, 2005).

Many techniques deal with missing data. The most common and straightforward one is *Listwise Deletion* (LD), which simply ignores the projects with missing values. The major advantage of the method is its simplicity and the ability to perform statistical calculations on a common sample base of cases. Its disadvantages are the dramatic loss of information when the percentage of missing values is high, and possible bias in the data. These problems can occur when there is some pattern in the missing data, i.e. when the distribution of missing values in some variables depends on the observed values of other variables in the dataset.
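As a rough sketch of listwise deletion (assuming a pandas-style tabular dataset; the column names and values below are hypothetical, not taken from the chapter's data), the method amounts to dropping every project record that contains any missing attribute:

```python
import pandas as pd
import numpy as np

# Hypothetical project dataset with missing values (NaN / None)
projects = pd.DataFrame({
    "effort":    [120.0, 85.0, np.nan, 200.0, 150.0],
    "team_size": [5, np.nan, 4, 8, np.nan],
    "language":  ["Java", "C", "Java", None, "C"],
})

# Listwise deletion: keep only projects with no missing attribute at all
complete = projects.dropna()
print(len(projects), len(complete))  # 5 projects before, 1 complete case after
```

Note how only one of the five projects survives: this illustrates the "dramatic loss of information" the text warns about when missing values are spread across many records.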

Other techniques estimate, or “impute”, the missing values. The resulting complete data can then be analyzed and modelled by standard methods (for example, regression analysis). These methods are called *imputation methods*. The problem is that most imputation methods produce continuous estimates, which are not realistic replacements for missing values when the variables are categorical. Since the majority of the variables in software datasets are categorical, with many missing values, it is reasonable to use an imputation method that produces categorical values in order to fill the incomplete dataset, and then to use it for constructing a prediction model.
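As one simple illustration of an imputation method that yields valid categorical values (mode imputation is a stand-in example here, not necessarily the technique the chapter itself applies), each missing entry of a categorical attribute can be replaced by that attribute's most frequent category:

```python
import pandas as pd

# Hypothetical categorical attribute with missing values
langs = pd.Series(["Java", "C", None, "Java", None, "Java"])

# Mode imputation: fill each missing value with the most frequent category,
# so the imputed values are genuine category labels rather than continuous estimates
imputed = langs.fillna(langs.mode()[0])
print(imputed.tolist())  # ['Java', 'C', 'Java', 'Java', 'Java', 'Java']
```

More sophisticated categorical imputation methods (e.g. hot-deck or model-based approaches) follow the same principle: the filled-in value is always drawn from the variable's own set of categories.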

Missing Data Techniques: These are statistical methods that have been developed in order to deal with the problem of missing data. These methods involve deletion of cases or variables and imputation methods.

Regression Receiver Operating Curves (RROC): RROCs represent the predictive power of alternative models on a two-dimensional plot, providing easily interpretable information.

Software Cost Estimation: The process of predicting the cost, in terms of effort or time, required to develop or maintain a software product. Software cost estimation is usually based on incomplete and noisy data, requiring statistical analysis and modeling.

Imputation: The estimation of missing values by statistical techniques. The missing values are filled in, and the resulting completed dataset can be analysed by standard methods.

Missing Values: A missing value occurs when no data value is stored for a variable in the current observation.

Regression Error Characteristic (REC) Curves: A REC curve is a two-dimensional graph for visualization of the prediction error of a model. The horizontal axis represents the error tolerance and the vertical axis represents the accuracy. Accuracy is defined as the percentage of cases that are predicted within the error tolerance.
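The REC curve described above can be computed directly from its definition. A minimal sketch (the function name and sample values are illustrative, not from the chapter): for each error tolerance on a grid, accuracy is the fraction of cases whose absolute prediction error falls within that tolerance.

```python
import numpy as np

def rec_curve(actual, predicted, n_points=100):
    """Return (tolerances, accuracy): accuracy is the fraction of cases
    whose absolute error is within each error tolerance."""
    errors = np.abs(np.asarray(actual, float) - np.asarray(predicted, float))
    tolerances = np.linspace(0, errors.max(), n_points)
    accuracy = np.array([(errors <= t).mean() for t in tolerances])
    return tolerances, accuracy

# Hypothetical actual vs. predicted effort values
actual    = [100, 150, 200, 250]
predicted = [110, 140, 230, 240]
tol, acc = rec_curve(actual, predicted, n_points=5)
print(acc)  # accuracy climbs to 1.0 once the tolerance reaches the largest error
```

Plotting `tol` against `acc` gives the two-dimensional graph of the definition: a model whose curve rises faster (more cases predicted within small tolerances) is the more accurate one.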

Statistical Tests: Methods for making decisions using empirical data. Statistical tests are based on probability theory and especially on probability distributions in order to make an inference on whether a specific result is significant, in the sense that it is unlikely to have occurred by chance.

Resampling Methods: These are statistical methods based on drawing new samples from an original sample of data in order to reconstruct the distribution of the initial population where the sample came from. They are used for various procedures, for example for computing confidence intervals and for making statistical tests. Common resampling techniques include bootstrap, jackknife and permutation tests.
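As a small sketch of the bootstrap (one of the resampling techniques listed above; the data values are hypothetical), a confidence interval for a statistic can be obtained by repeatedly resampling the data with replacement and taking percentiles of the resampled statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = np.array([12, 15, 9, 20, 14, 11, 18, 13])  # hypothetical effort data

# Bootstrap: draw many resamples with replacement, recompute the statistic
# (here, the mean) on each, and use percentiles of the resampled statistics
# as an approximate 95% confidence interval
boot_means = [rng.choice(sample, size=len(sample), replace=True).mean()
              for _ in range(2000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"approximate 95% CI for the mean: [{lo:.1f}, {hi:.1f}]")
```

The same resampling machinery underlies permutation tests (resampling under the null hypothesis) and the jackknife (leave-one-out resampling).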
