An Open Source Software Evaluation Model

Joel P. Confino, Phillip A. Laplante
DOI: 10.4018/jsita.2010101505

Abstract

The allure of free, industrial-strength software has many enterprises rethinking their open source strategies. However, selecting an appropriate open source software for a given problem or set of requirements is very challenging. The challenges include a lack of generally accepted evaluation criteria and a multitude of eligible open source software projects. The contribution of this work is a set of criteria and a methodology for assessing candidate open source software for fitness of purpose. To test this evaluation model, several important open source projects were examined. The results of this model were compared against the published results of an evaluation performed by the Defense Research and Development Canada agency. The proposed evaluation model relies on publicly accessible data, is easy to perform, and can be incorporated into any open source strategy.

Introduction

“Open source software” implies more than just freely distributing source code. A generally accepted definition for open source software relies on 10 criteria (Coar, 2006):

  • The software may be freely redistributed; the license may not require a royalty or other fee

  • The source code must be freely accessible

  • The software license must allow modifications and derived works

  • The license may require derived works to carry a different name or version number, so there is no confusion of authorship

  • There must be no discrimination of use against persons or groups

  • The license cannot limit the use of the software to a particular field of endeavor, business, or industry

  • The license must be freely distributed along with the software

  • The license must not be contingent on the software being distributed in a particular product or package

  • The license can make no claims about other software that may be distributed along with licensed software

  • The license cannot dictate that only a particular technology be used with it
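
Taken together, these criteria amount to a screening checklist for a candidate license. The sketch below is a minimal illustration only, not part of the evaluation model proposed in this article; the criterion names are paraphrases of the list above, and the review answers are hypothetical.

# Encode the ten open source criteria as a checklist that a reviewer
# fills in (True/False) for a candidate license.
OSD_CRITERIA = [
    "free redistribution",
    "source code available",
    "modifications and derived works allowed",
    "integrity of authorship protected",
    "no discrimination against persons or groups",
    "no restriction to a field, business, or industry",
    "license distributed with the software",
    "license not specific to a product or package",
    "license does not restrict other software",
    "license is technology-neutral",
]

def is_open_source(assessment):
    """Return True only if every criterion was judged to be satisfied."""
    return all(assessment.get(criterion, False) for criterion in OSD_CRITERIA)

# Hypothetical review of a candidate license in which every criterion passes.
candidate = {criterion: True for criterion in OSD_CRITERIA}
print(is_open_source(candidate))  # True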

Open source software is widely used by governments, businesses, and non-profits alike because of its financial benefits. However, even though the software is “free,” it is critical to select the right open source software solution for a given situation.

It can be difficult to evaluate any type of software for fitness of purpose, but open source software presents unique challenges and advantages. One of the unique challenges is the sheer number of open source projects: anyone can create an open source project on free hosting sites such as SourceForge.net, and many of these projects are very immature (Polancic, Horvat, & Rozman, 2004). Another challenge is that open source software rarely provides documentation (Wheeler, 2009). Without the documentation and marketing materials that traditionally accompany commercial software, it can be difficult to determine the features of an open source software product.

Balancing the challenges of evaluating open source software are significant advantages, chiefly that both the executable and the source code are freely available. Another advantage is that many open source projects provide public read-only access to their problem-tracking system, which can give valuable insight into how fast the project is growing, whether defects are being found and fixed, and how long it takes to add new features.
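
Many projects now expose their problem-tracking data through a public web API, so this kind of insight can be gathered automatically. The sketch below is an illustration only and is not part of the article's method; it assumes a project hosted on GitHub that labels its defects “bug”, and uses the third-party requests library to estimate the median number of days the project takes to close a defect report.

from datetime import datetime
from statistics import median

import requests

def median_days_to_fix(owner, repo):
    """Median days between opening and closing of the project's closed 'bug' issues."""
    url = f"https://api.github.com/repos/{owner}/{repo}/issues"
    params = {"state": "closed", "labels": "bug", "per_page": 100}
    issues = requests.get(url, params=params, timeout=30).json()

    durations = []
    for issue in issues:
        if "pull_request" in issue:  # the issues endpoint also returns pull requests
            continue
        opened = datetime.strptime(issue["created_at"], "%Y-%m-%dT%H:%M:%SZ")
        closed = datetime.strptime(issue["closed_at"], "%Y-%m-%dT%H:%M:%SZ")
        durations.append((closed - opened).days)

    return median(durations) if durations else None

# Example with a hypothetical repository name:
# print(median_days_to_fix("example-org", "example-project"))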

Existing Open Source Evaluation Models

There are at least four open source software evaluation models. Capgemini (2003) created the Open Source Maturity Model (OSMM), which employs 12 weighted criteria that can be entered into a spreadsheet for evaluation. Navica (2004) also created an Open Source Maturity Model (OSMM), which assesses six key elements of a software product and provides spreadsheets to assist in the evaluation (Golden, 2004). It is coincidental that both the Capgemini and Navica models use the same name and abbreviation.

The Qualification and Selection of Open Source software method (QSOS) was created in 2004 and sponsored by the consulting company Atos Origin. QSOS contains a set of evaluation criteria, and provides web-based tools to assist in the evaluation process. The website also contains a database of prior evaluations (QSOS, 2004).

The Business Readiness Rating (BRR) was created in 2005 and sponsored by Carnegie Mellon West, SpikeSource, O’Reilly, and Intel. The BRR provides evaluation criteria and spreadsheets for evaluation as well (BRR, 2007).
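
Despite their differences, these models share the same core calculation: score the candidate against a set of criteria and combine the scores with weights, typically in a spreadsheet. The short sketch below shows that arithmetic; the criteria, weights, and scores are illustrative only and are not taken from OSMM, QSOS, or the BRR.

# Weighted-criteria scoring, as used (in spreadsheet form) by the models above.
def weighted_score(scores, weights):
    """Combine per-criterion scores (e.g., on a 1-5 scale) into one weighted rating."""
    total_weight = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total_weight

weights = {"maturity": 3, "documentation": 1, "support": 2, "community": 2}
scores = {"maturity": 4, "documentation": 2, "support": 3, "community": 5}
print(round(weighted_score(scores, weights), 2))  # 3.75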

Even though it is not a formal evaluation model, another important work is Wheeler’s (2008) “How to Evaluate Open Source Software/Free Software (OSS/FS) Programs.”
