OOPS! (OntOlogy Pitfall Scanner!): An On-line Tool for Ontology Evaluation

María Poveda-Villalón, Asunción Gómez-Pérez, Mari Carmen Suárez-Figueroa
Copyright © 2014 | Pages: 28
DOI: 10.4018/ijswis.2014040102

Abstract

This paper presents two contributions to the field of ontology evaluation. The first is a live catalogue of pitfalls that extends previous work on modeling errors with new pitfalls resulting from an empirical analysis of over 693 ontologies. The catalogue classifies pitfalls according to the Structural, Functional, and Usability-Profiling dimensions and, for each pitfall, records its importance level (critical, important, or minor) and the number of ontologies in which it has been detected. The second is OOPS! (OntOlogy Pitfall Scanner!), a tool that detects pitfalls in ontologies and is targeted at newcomers and domain experts unfamiliar with description logics and ontology implementation languages. The tool operates independently of any ontology development platform and is available online. The system is evaluated both through a survey of users' satisfaction and through worldwide usage statistics, and it is also compared with existing ontology evaluation tools in terms of the coverage of pitfalls detected.

Introduction

The Linked Data (LD) effort has become a catalyst for the realization of the vision of the Semantic Web originally proposed by Berners-Lee et al. (2001). In this scenario, a large amount of data, annotated by means of ontologies, is shared on the Web. Such ontologies enrich the published data with semantics and facilitate their integration. In other cases, ontologies are used to model data automatically extracted from web sources, which can be noisy and contain errors. Therefore, ontologies must not only be published according to LD principles, but they must also be accurate and of high quality from a knowledge representation perspective in order to avoid inconsistencies or undesired inferences.
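To make the kind of anomaly at stake concrete, the following sketch (ours, not from the paper; all names are hypothetical) shows how a well-known modeling pitfall, confusing the subclass-of relation with the instance-of relation, produces an undesired inference under plain RDFS entailment:

```python
# Minimal sketch of a modeling pitfall yielding an undesired inference.
# Using rdfs:subClassOf where rdf:type was intended makes everything typed
# as "Madrid" also a City once the RDFS closure is computed.
# Requires: pip install rdflib owlrl
from rdflib import Graph, Namespace, RDF, RDFS
from owlrl import DeductiveClosure, RDFS_Semantics

EX = Namespace("http://example.org/")  # hypothetical namespace
g = Graph()

# Pitfall: Madrid modeled as a subclass of City instead of an instance of it.
g.add((EX.Madrid, RDFS.subClassOf, EX.City))
g.add((EX.PlazaMayor, RDF.type, EX.Madrid))  # a square "of type" Madrid

DeductiveClosure(RDFS_Semantics).expand(g)  # materialize the RDFS entailments

# Undesired inference: the square is now entailed to be a City.
print((EX.PlazaMayor, RDF.type, EX.City) in g)  # True
```

Detecting such anomalies before data is published and reasoned over is precisely what motivates automated pitfall scanning.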

The correct application of ontology development methodologies (e.g., METHONTOLOGY (Fernández-López et al., 1999), On-To-Knowledge (Staab et al., 2001), DILIGENT (Pinto, Tempich, & Staab, 2004), or the NeOn Methodology (Suárez-Figueroa et al., 2012)) benefits the quality of the ontology being built. However, such quality is not fully guaranteed, because ontologists face a wide range of difficulties when modeling ontologies (Aguado de Cea et al., 2008; Blomqvist, Gangemi, & Presutti, 2009; Rector et al., 2004), and these difficulties may introduce anomalies into the resulting ontologies. Therefore, in any ontology development project it is vital to perform the ontology evaluation activity, which checks the technical quality of an ontology against a frame of reference.

Over the last decades, a large body of research on ontology evaluation has been produced. Some of these attempts define a generic quality evaluation framework (Duque-Ramos et al., 2011; Gangemi et al., 2006; Gómez-Pérez, 2004; Guarino & Welty, 2009; Strasunskas & Tomassen, 2008); others propose evaluating an ontology depending on its final (re)use (Suárez-Figueroa, 2010); some others propose quality models based on features, criteria, and metrics (Burton-Jones et al., 2005); whereas others present methods for pattern-based evaluation (Djedidi & Aufaure, 2010; Presutti et al., 2008).

As a consequence of the emergence of new methods and techniques, a number of tools have been proposed that ease ontology diagnosis by reducing human intervention. This is the case of XD-Analyzer, a plug-in for the NeOn Toolkit, and OntoCheck (Schober et al., 2012), a plug-in for Protégé. The former checks some structural and architectural ontology features, whereas the latter focuses on metadata aspects. MoKi (Pammer, 2010), a wiki-based ontology editor, also provides some evaluation features. Finally, RaDON (Ji et al., 2009) is a NeOn Toolkit plug-in that detects and handles logical inconsistencies in ontologies.

This paper presents two main contributions. The first is a live, on-line catalogue of pitfalls that extends previous work on modeling errors identified in the ontology engineering field (Allemang & Hendler, 2011; Gómez-Pérez, 2004; Noy & McGuinness, 2001; Rector et al., 2004), including some persistent accessibility problems emerging in the Linked Data field (Archer, Goedertier, & Loutas, 2012; Heath & Bizer, 2011; Hogan et al., 2010). The second is OOPS! (OntOlogy Pitfall Scanner!), a tool for (semi-)automatically diagnosing OWL ontologies. The system aims to help ontology developers evaluate their ontologies and focuses on newcomers and those not familiar with description logics and ontology implementation languages. OOPS! operates independently of any ontology development platform and is available online at http://www.oeg-upm.net/oops. Note that repairing the ontology is out of the scope of OOPS!.
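As an illustration of this platform independence, the sketch below calls OOPS! over HTTP from Python. It assumes the REST web service that the OOPS! team later published at http://oops.linkeddata.es/rest, which is not described in this paper; the request element names (OntologyUrl, OntologyContent, Pitfalls, OutputFormat) follow that service's documentation and may change, so treat this as a sketch rather than a definitive client.

```python
# Minimal sketch of invoking OOPS! programmatically, assuming the REST web
# service at http://oops.linkeddata.es/rest (published after this paper).
# Element names follow that service's documentation; verify before relying on them.
# Requires: pip install requests
import requests

OOPS_ENDPOINT = "http://oops.linkeddata.es/rest"  # assumption: later REST service

# Ask OOPS! to scan an ontology published at a URL (hypothetical example URL);
# leaving Pitfalls empty requests a scan for the whole catalogue.
request_body = """<?xml version="1.0" encoding="UTF-8"?>
<OOPSRequest>
  <OntologyUrl>http://example.org/myOntology.owl</OntologyUrl>
  <OntologyContent></OntologyContent>
  <Pitfalls></Pitfalls>
  <OutputFormat>XML</OutputFormat>
</OOPSRequest>"""

response = requests.post(
    OOPS_ENDPOINT,
    data=request_body.encode("utf-8"),
    headers={"Content-Type": "application/xml"},
    timeout=60,
)
response.raise_for_status()
print(response.text)  # pitfall report; diagnosis only, OOPS! does not repair
```

Consistent with the paper's scope, the service returns a diagnosis (the pitfalls detected and their importance levels); acting on that report remains the developer's task.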
