A Systematic Approach to Evaluating Open Source Software

Norita Ahmad, Phillip A. Laplante
Copyright: © 2013 |Pages: 20
DOI: 10.4018/978-1-4666-2782-6.ch004

Abstract

Selecting appropriate Open Source Software (OSS) for a given problem or a set of requirements can be very challenging. Some of the difficulties are due to the fact that there is not a generally accepted set of criteria to use in evaluation and that there are usually many OSS projects available to solve a particular problem. In this study, the authors propose a set of criteria and a methodology for assessing candidate OSS for fitness of purpose using both functional and non-functional factors. The authors then use these criteria in an improved solution to the decision problem using the well-developed Analytical Hierarchy Process. In order to validate the proposed model, it is applied at a technology management company in the United Arab Emirates, which integrates many OSS solutions into its Information Technology infrastructure. The contribution of this work is to help decision makers to better identify an appropriate OSS solution using a systematic approach without the need for intensive performance testing.
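The abstract names the Analytical Hierarchy Process (AHP) as the decision method. As a rough illustration of AHP's weighting step, the sketch below derives criterion priority weights from a pairwise comparison matrix using the common geometric-mean approximation; the criteria names and judgment values are invented for illustration and are not taken from the chapter.

```python
# Minimal AHP weighting sketch (geometric-mean approximation).
# Criteria and comparison values are hypothetical examples.
import math

criteria = ["maintainability", "documentation", "community", "license"]

# Pairwise comparison matrix on Saaty's 1-9 scale:
# pairwise[i][j] = how much more important criterion i is than criterion j.
pairwise = [
    [1,   3,   5,   7],
    [1/3, 1,   3,   5],
    [1/5, 1/3, 1,   3],
    [1/7, 1/5, 1/3, 1],
]

# Geometric mean of each row, normalized, approximates the principal
# eigenvector of the matrix, i.e. the criterion priority weights.
geo_means = [math.prod(row) ** (1 / len(row)) for row in pairwise]
total = sum(geo_means)
weights = [g / total for g in geo_means]

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```

In a full AHP application one would also check the consistency ratio of the judgments and repeat the procedure to score each candidate OSS against each criterion.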
Chapter Preview

Introduction

The terms “Open Source Software” (OSS), “Free Software”, “Free Open Source Software” (FOSS), and “Free/Libre Open Source Software” (FLOSS) are often treated synonymously (Feller & Fitzgerald, 2002; Feller et al., 2005; Koch, 2005). Their respective license agreements, however, show that they are quite different. Free software is generally licensed under the GNU General Public License (GPL), while OSS may use either the GPL or some other license that allows the integration of software that is not free (Elliott & Scacchi, 2008; Gay, 2002). Free software is always available as OSS, but OSS is not always free software. It is therefore more precise to use “FOSS” or “FLOSS” rather than the general term “open source” when the distinction between the two models matters and the original meaning of free software is to be preserved. In this paper, we use terms specific to either free software or OSS wherever such differentiation is necessary.

Since most OSS is free to use and modify with no licensing fees, it is attractive to many users, including governments, businesses, and non-profits (Feller, Fitzgerald, Hissam, & Lakhani, 2005). However, it can be difficult to evaluate or choose the right OSS. One unique challenge of evaluating OSS is the sheer number of projects available (Feller & Fitzgerald, 2002). Anyone can create an OSS project on a free hosting site such as SourceForge.net, and this low barrier to entry means that many OSS projects are very immature (Deprez & Alexandre, 2008; Gacek & Arief, 2004). Another challenge is that OSS projects often have little documentation (Wheeler, 2005). Without the documentation and user manuals that traditionally accompany commercial software, it can be difficult to confirm a project’s feature set.

Balancing these evaluation challenges are the unique advantages OSS provides. The biggest advantage is that the source code is available for analysis, which is vital in determining whether the software is of high quality and is maintainable. Another advantage is that many OSS projects provide public read-only access to their issue tracking systems, which can give valuable insight into how fast the project is growing, whether defects are being found and fixed, and how long it takes to resolve issues.
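The issue-tracker signals mentioned above can be summarized with simple metrics such as the fraction of issues closed and the median time to resolution. The sketch below illustrates this under the assumption that issue records have been exported with opened/closed dates; the records and field names are invented for illustration.

```python
# Hypothetical sketch: summarizing exported issue-tracker data to gauge
# project health. Records and field names are illustrative assumptions.
from datetime import date
from statistics import median

issues = [
    {"opened": date(2012, 1, 5),  "closed": date(2012, 1, 9)},
    {"opened": date(2012, 2, 1),  "closed": date(2012, 2, 20)},
    {"opened": date(2012, 3, 10), "closed": None},  # still open
]

closed = [i for i in issues if i["closed"] is not None]
resolution_days = [(i["closed"] - i["opened"]).days for i in closed]

fix_rate = len(closed) / len(issues)        # share of issues resolved
median_days = median(resolution_days)       # typical time to resolve

print(f"fix rate: {fix_rate:.0%}, median resolution: {median_days} days")
```

Tracked over time, rising fix rates and falling resolution times suggest an active, responsive project, while the opposite trend can flag an abandoned one.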

Selecting appropriate OSS for a given problem or a set of requirements can be very challenging. Some of the difficulties are due to the fact that there is no generally accepted set of criteria to use in evaluation, and that there are usually many OSS projects available to solve a particular problem. Therefore the evaluation is often done in an ad hoc manner, using whatever criteria are available to the evaluators (Conradi, Bunse, Torchiano, Slyngstad, & Morisio, 2009; Norris, 2004). This kind of approach leads to evaluations that are neither systematic nor standardized within or between organizations, and that are not repeatable, which, in turn, can slow down project development. Another known problem is that the evaluation process often lacks an operational approach and does not involve all stakeholders (Merilinna & Matinlassi, 2006; Torchiano & Morisio, 2004).
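A systematic, repeatable evaluation of the kind the chapter argues for ultimately reduces to combining agreed criterion weights with per-candidate scores. The sketch below shows that final aggregation step; the weights, candidates, and scores are invented for illustration only.

```python
# Sketch of a repeatable scoring step: fixed criterion weights (e.g. from
# AHP) combined with normalized per-candidate scores. All values here are
# hypothetical examples, not data from the chapter's case study.
weights = {
    "maintainability": 0.55,
    "documentation":   0.26,
    "community":       0.12,
    "license":         0.07,
}

candidates = {
    "project_a": {"maintainability": 0.8, "documentation": 0.4,
                  "community": 0.9, "license": 1.0},
    "project_b": {"maintainability": 0.6, "documentation": 0.9,
                  "community": 0.5, "license": 1.0},
}

# Weighted sum per candidate; the same weights are reused for every
# candidate, which is what makes the ranking repeatable across evaluations.
totals = {
    name: sum(weights[c] * scores[c] for c in weights)
    for name, scores in candidates.items()
}
best = max(totals, key=totals.get)
print(best, totals)
```

Because the weights are fixed in advance and shared across evaluators, two teams scoring the same candidates against the same criteria arrive at the same ranking, which is precisely what ad hoc evaluation fails to guarantee.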
