Improving Agile Methods

Barbara Russo, Marco Scotto, Alberto Sillitti, Giancarlo Succi
Copyright: © 2010 | Pages: 43
DOI: 10.4018/978-1-59904-681-5.ch012

Abstract

Apart from personal experience, anecdotal evidence and demonstrations are still the most prevalent and widespread methods on which software engineers have to base their knowledge and decisions. Although a search of online databases such as the ACM or IEEE digital libraries returns numerous papers, for example on software quality or cost estimation, many of them either do not perform any empirical validation at all (they are mostly experience reports, or base their ideas more on personal opinion than on hard data) or the performed validation has limited scientific value.

12.1 Motivation

Apart from personal experience, anecdotal evidence and demonstrations are still the most prevalent and widespread methods on which software engineers have to base their knowledge and decisions. Although a search of online databases such as the ACM or IEEE digital libraries returns numerous papers, for example on software quality or cost estimation, many of them either do not perform any empirical validation at all (they are mostly experience reports, or base their ideas more on personal opinion than on hard data) or the performed validation has limited scientific value, as it exhibits one or more of several drawbacks:

  • Data collected and used for analysis are not characterized properly. In particular, it is not always clear how the data have been collected, what their granularity is, and what statistical properties (kind of distribution, outliers, etc.) they possess.

  • Studies often use the same set of rather old and/or artificial data (Mair et al., 2005), thus risking a bias towards those data sets and failing to represent current industrial practices and environments.

  • Scarcity of data from industrial settings is a key problem in empirical software engineering, and thus experiments are most of the time not replicated by other researchers, which limits their validity.

  • Data are often collected manually by developers or by dedicated people (for example, a quality assurance department) within an organization. A manual data collection process is not only costly (it requires a lot of resources) but also error prone, as is any human activity. Moreover, if the data collection activity does not conform to the development practices (as in AMs), there is a high risk that the data are biased, as developers tend either not to collect them at all or to collect them in a way that promotes their personal development practices (Johnson et al., 2003).

Statistical methods are not always used correctly by software engineering researchers: data coming from software engineering environments are often messy and show distributions that do not allow simple statistical analyses such as ordinary least squares regression or Pearson correlation (Meyers & Well, 2003). Moreover, statistical analyses and/or data mining techniques should always define a clear methodology for model selection, accuracy assessment, and predictive performance. For example, most papers in empirical software engineering use the Magnitude of the Relative Error of the Estimate (MRE) both as accuracy indicator and as selection criterion for prediction models. However, Myrtveit et al. (1999) show that a model fitted using MRE as the error function, and at the same time selected using MRE, tends to underfit, since it fits to values below the central tendency of the dependent variable (a small illustrative sketch is given after the list below).

Tichy (1998) analyzes the status of experimentation in Computer Science and concludes that there is a significant lack of experimentation compared to other fields of science. According to him, the main reason for this is cost. While it is true that experimentation in software engineering consumes considerable resources and entails non-negligible costs, Tichy emphasizes the advantages and value of experimentation:

  • Experimentation can help to build a reliable base of knowledge and thus reduce uncertainty about which theories, methods, and tools are adequate.

  • Observation and experimentation can lead to new, useful, and unexpected insights and open whole new areas of investigation. Experimentation can push into unknown areas where engineering progresses slowly, if at all.

  • Experimentation can accelerate progress by quickly eliminating fruitless approaches, erroneous assumptions, and fads. It also helps orient engineering and theory in promising directions.
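
Returning to the MRE discussion above, the following is a minimal sketch (not taken from the chapter) of why selecting prediction models by minimising the mean MRE can favour underestimation. The MRE of a single observation is |actual - predicted| / actual, so an underestimate can never yield an MRE above 1 while an overestimate is unbounded. The effort figures and the constant-prediction "model" below are invented purely for illustration.

    # Minimal sketch: why minimising mean MRE (MMRE) can favour underestimation.
    # The "model" is a single constant prediction; the effort values are made up.

    def mre(actual, predicted):
        # Magnitude of Relative Error for one observation
        return abs(actual - predicted) / actual

    def mmre(actuals, predicted):
        # Mean MRE of a constant prediction over all observations
        return sum(mre(a, predicted) for a in actuals) / len(actuals)

    def mse(actuals, predicted):
        # Mean squared error of the same constant prediction, for comparison
        return sum((a - predicted) ** 2 for a in actuals) / len(actuals)

    actuals = [10, 50, 100, 500, 1000]   # hypothetical effort values

    candidates = range(1, 1001)
    best_by_mmre = min(candidates, key=lambda c: mmre(actuals, c))
    best_by_mse = min(candidates, key=lambda c: mse(actuals, c))

    print("constant minimising MMRE:", best_by_mmre)   # 10, far below the mean
    print("constant minimising MSE: ", best_by_mse)    # 332, the mean of the actuals
    print("mean of actuals:         ", sum(actuals) / len(actuals))

Under squared error the best constant is the mean of the actual values, whereas under MMRE it collapses towards the smallest one, illustrating the underestimation tendency that Myrtveit et al. point out.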
