A Survey of Ontology Benchmarks for Semantic Web Ontology Tools

Ondřej Zamazal (University of Economics, Prague, Czech Republic)
Copyright: © 2020 | Pages: 22
DOI: 10.4018/IJSWIS.2020010103

Abstract

Software engineering employs various benchmarks for software evaluation, which enables software developers to continuously improve their products. The same need is intrinsic to software tools in the semantic web field. While many benchmarks are already available, there has so far been no overview and categorization of them. This work provides such an overview and categorization, focusing specifically on benchmarks in which an ontology plays an important role. The benchmarks are naturally categorized in line with a categorization of ontology tools, along with an indication of the activities for which each benchmark is deliberate and for which it is non-deliberate. Although the article itself can already navigate a reader to an adequate benchmark, we have moreover designed a flexible rule-based tool, built on the analysis of the existing benchmarks, that recommends benchmarks automatically.

1. Introduction

Knowledge representation is an important part of intelligent systems. Knowledge is often represented using an ontology, a “formal specification of a shared conceptualization” (Borst, 1997). While ontologies are used in many fields of computer science, their expansion is mainly connected to the Web, where they underpin semantic web efforts. The main idea of the semantic web is to build intelligent software agents that can cooperate across the huge web space to accomplish tasks for users.

The semantic web is enabled by its architecture, the semantic web stack (Figure 1). The stack is rooted in the traditional web, which is reflected by the traditional web technologies positioned at its lower layers: data transfer (HTTP), resource identification (IRI), character encoding (UNICODE) and data serialization (XML). The core semantic web technologies are placed above the traditional ones and deal with data representation and interchange via the Resource Description Framework (RDF). On top of RDF, the semantically oriented RDF Schema (RDFS) enables us to construct simple ontologies by specifying classes of resources, relationships among resources using properties, domains and ranges of properties, and taxonomies of classes and properties. This language can be further extended with language constructs from the Web Ontology Language (OWL), as explained in (Hitzler et al., 2009).

Figure 1. The Semantic Web Stack
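To make the RDFS constructs just described more concrete, the following minimal, library-free Python sketch represents such statements as plain subject-predicate-object triples. All `ex:` names are invented for illustration and do not come from any real vocabulary.

```python
# A minimal sketch of RDFS-style statements as subject-predicate-object triples.
# All ex: names are illustrative only.

triples = {
    # taxonomy of classes (rdfs:subClassOf)
    ("ex:Dog", "rdfs:subClassOf", "ex:Animal"),
    ("ex:Animal", "rdfs:subClassOf", "ex:LivingThing"),
    # a property with its domain and range
    ("ex:hasOwner", "rdfs:domain", "ex:Dog"),
    ("ex:hasOwner", "rdfs:range", "ex:Person"),
    # an individual and its description
    ("ex:rex", "rdf:type", "ex:Dog"),
    ("ex:rex", "ex:hasOwner", "ex:alice"),
}

def objects(subject, predicate):
    """Return every object o for which (subject, predicate, o) is asserted."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("ex:Dog", "rdfs:subClassOf"))  # {'ex:Animal'}
print(objects("ex:hasOwner", "rdfs:range"))  # {'ex:Person'}
```

Real RDF toolkits store triples in the same spirit, though with proper IRI handling and serialization formats such as Turtle or RDF/XML.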

Further, there are technologies for querying RDF data, e.g., the Simple Protocol and RDF Query Language (SPARQL), and for capturing rules beyond description logics, e.g., the Rule Interchange Format (RIF). The proof and logic layers relate to the technologies on the layers below. For example, a primary purpose of developing ontologies was the option to infer an implicit taxonomy between classes and a categorization of individuals; this is realized by ontology reasoners.
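The reasoning task mentioned above, inferring an implicit taxonomy between classes and categorizing individuals, can be sketched in a few lines of plain Python. This is only a toy fixpoint computation over two RDFS entailment patterns; a real reasoner implements many more rules, and the `ex:` names are invented for illustration.

```python
# Asserted triples; ex: names are made up for illustration.
triples = {
    ("ex:Dog", "rdfs:subClassOf", "ex:Animal"),
    ("ex:Animal", "rdfs:subClassOf", "ex:LivingThing"),
    ("ex:rex", "rdf:type", "ex:Dog"),
}

def subclass_closure(triples):
    """Implicit taxonomy: transitive closure of rdfs:subClassOf."""
    sub = {(s, o) for s, p, o in triples if p == "rdfs:subClassOf"}
    changed = True
    while changed:
        changed = False
        for a, b in set(sub):
            for c, d in set(sub):
                if b == c and (a, d) not in sub:
                    sub.add((a, d))  # a is a (transitive) subclass of d
                    changed = True
    return sub

def infer_types(triples):
    """Categorization of individuals: rdf:type propagated up the taxonomy."""
    sub = subclass_closure(triples)
    inferred = {(s, o) for s, p, o in triples if p == "rdf:type"}
    for ind, cls in set(inferred):
        for a, b in sub:
            if a == cls:
                inferred.add((ind, b))  # the individual also belongs to b
    return inferred

print(sorted(infer_types(triples)))
```

Here the reasoner concludes, for instance, that `ex:rex` is also an `ex:Animal` and an `ex:LivingThing`, even though neither membership was asserted.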

The remaining layers, trust and cryptography, involve technologies whose employment in the semantic web is still under development and which should enhance the credibility of semantic web applications. Each semantic web technology can be supported by corresponding semantic web tools, e.g., for authoring RDFS or OWL ontologies, querying ontologies using SPARQL, etc. In this article, we focus on semantic web tools that deal with the semantic layers of the semantic web stack, and particularly with ontologies, from now on shortly called ontology tools. While an end user interacts with a final semantic web application, ontology tools are rather intended for semantic web application developers.

In order to continuously enhance the quality of software, software engineering applies benchmarking as “a method of measuring performance against a standard, or a given set of standards” (Weiss, 2002). In contrast to software evaluation, benchmarking aims at continuous improvement with regard to a given set of standards known as benchmarks. Software evaluation and benchmarking are also important testing activities for semantic web tools. Since the terminology of evaluation and benchmarking is not used uniformly within the semantic web, we will consider evaluation to be rather ad-hoc software testing, and benchmarking to be a recurrent measuring activity related to a standard benchmark suite. We can thus define benchmarking as follows:
