XML Benchmarking: The State of the Art and Possible Enhancements

Irena Mlynkova
DOI: 10.4018/978-1-60566-308-1.ch014

Abstract

Since XML technologies have become a standard for data representation, numerous methods for processing XML data emerge every day. Consequently, it is necessary to compare newly proposed methods with existing ones, as well as to analyze the effect of a particular method when applied to various types of data. In this chapter, the authors provide an overview of existing approaches to XML benchmarking from the perspective of various applications and show that, to date, the problem has been largely marginalized. Therefore, in the second part of the chapter they discuss persisting open issues and their possible solutions.

Chapter Preview

Introduction

Since XML (Bray et al., 2006) became a de facto standard for data representation and manipulation, numerous methods have been proposed for efficiently managing, processing, exchanging, querying, updating and compressing XML documents, and new proposals emerge every day. Naturally, each author performs various experimental tests of the newly proposed method and describes its advantages and disadvantages. But on the basis of such descriptions alone it can be very difficult for a future user to decide which of the existing approaches best suits his/her particular requirements. The problem is that the various methods are usually tested on different data sets derived from diverse sources which are either unavailable or which were created only for the purposes of a particular test, with the special requirements of a particular application in mind, etc.

An author of a new method encounters a similar problem whenever he/she wants to compare the new proposal with an existing one. This is possible only if the source or executable files of the existing method or, at least, identical testing data sets are available. But too often this information is not accessible. In addition, even in the latter case, the performance evaluation is limited to a testing set whose characteristics are often unknown. Hence, a reader finds it difficult to obtain a clear notion of the analyzed situation.

An analogous problem occurs if we want to test the behaviour of a particular method on various types of data, or to determine the correlation between the efficiency of the method and the changing complexity of the input data. Not even the process of gathering the testing data sets is simple. Firstly, real-world XML data usually contain a huge number of errors (Mlynkova et al., 2006) which need to be corrected. And, what is worse, the real-world data sets are usually surprisingly simple and do not cover all the constructs allowed by the XML specifications.
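As an illustration of the first obstacle, the following is a minimal sketch (assuming Python and its standard xml.etree.ElementTree module; the directory layout and file names are hypothetical) of the kind of preliminary check one typically runs over a gathered collection to find documents that are not even well-formed:

```python
import sys
from pathlib import Path
from xml.etree import ElementTree


def check_well_formedness(directory: str) -> list[tuple[Path, str]]:
    """Return (file, error message) pairs for XML files that fail to parse."""
    errors = []
    for path in Path(directory).rglob("*.xml"):
        try:
            ElementTree.parse(path)  # raises ParseError on malformed documents
        except ElementTree.ParseError as exc:
            errors.append((path, str(exc)))
    return errors


if __name__ == "__main__":
    # e.g. python check_xml.py ./collected_data  (directory name is illustrative)
    for path, message in check_well_formedness(sys.argv[1]):
        print(f"{path}: {message}")
```

Documents reported by such a check have to be repaired or discarded before the collection can serve as a meaningful testing set.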

Currently, there exist several projects which provide publicly available, well-described testing XML data collections (usually together with a set of testing XML operations). We can find either fixed (or gradually extended) databases of real-world XML data (e.g. the INEX project (INEX, 2007)) or projects which enable us to generate synthetic XML data on the basis of user-specified characteristics (e.g. the XMark project (Busse, 2003)). But in the former case we are limited by the characteristics of the testing set, whereas in the latter case the characteristics of the generated data that can be specified are trivial (such as the amount and size of the data).
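To give a concrete notion of what such "trivial" characteristics look like, here is a toy generator sketched in Python (this is not XMark's own xmlgen tool; the element names and parameters are invented for illustration) whose only tunable properties are the number of records and the size of their text content:

```python
import random
import string
from xml.etree.ElementTree import Element, ElementTree, SubElement


def generate_document(num_records: int, text_length: int, seed: int = 42) -> ElementTree:
    """Build a synthetic XML document parameterized only by the amount and size of data."""
    random.seed(seed)
    root = Element("collection")
    for i in range(num_records):
        record = SubElement(root, "record", attrib={"id": str(i)})
        # Fill each record with random text of the requested length.
        record.text = "".join(random.choices(string.ascii_lowercase + " ", k=text_length))
    return ElementTree(root)


if __name__ == "__main__":
    # A small document: 1,000 records of 200 characters each (values are arbitrary).
    generate_document(num_records=1000, text_length=200).write(
        "synthetic.xml", encoding="utf-8", xml_declaration=True
    )
```

Structural properties that strongly influence XML processing, such as depth of nesting, fan-out, mixed content or recursion, cannot be controlled by parameters of this kind, which is precisely the limitation discussed above.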
