Database Benchmarks

Jérôme Darmont (ERIC, University of Lyon 2, France)
DOI: 10.4018/978-1-60566-026-4.ch151

Abstract

Performance measurement tools are very important, both for designers and users of Database Management Systems (DBMSs). Performance evaluation is useful to designers to determine elements of architecture, and, more generally, to validate or refute hypotheses regarding the actual behavior of a DBMS. Thus, performance evaluation is an essential component in the development process of well-designed and efficient systems. Users may also employ performance evaluation, either to compare the efficiency of different technologies before selecting a DBMS, or to tune a system. Performance evaluation by experimentation on a real system is generally referred to as benchmarking. It consists of performing a series of tests on a given DBMS to estimate its performance in a given setting. Typically, a benchmark consists of two main elements: a database model (conceptual schema and extension) and a workload model (a set of read and write operations) to apply to this database, following a predefined protocol. Most benchmarks also include a set of simple or composite performance metrics such as response time, throughput, number of input/output operations, and disk or memory usage. The aim of this article is to present an overview of the major families of state-of-the-art database benchmarks, namely, relational benchmarks, object and object-relational benchmarks, XML benchmarks, and decision-support benchmarks, and to discuss the issues, tradeoffs, and future trends in database benchmarking. We particularly focus on XML and decision-support benchmarks, which are currently the most innovative tools developed in this area.
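The two elements named above — a database model (schema plus test data) and a workload model (read and write operations), with response time and throughput as metrics — can be illustrated by a deliberately miniature sketch. Everything here (the schema, the data sizes, the read/write mix) is hypothetical and chosen only to show the structure of a benchmark, not to reproduce any standard one.

```python
import sqlite3
import time

def build_database(conn):
    # Database model: a conceptual schema plus its extension (test data).
    # A single illustrative table; real benchmarks use far richer schemas.
    conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO customer VALUES (?, ?)",
                     [(i, f"cust-{i}") for i in range(1000)])

def run_workload(conn):
    # Workload model: a fixed mix of read and write operations,
    # here one write for every four reads, following a set protocol.
    timings = []
    for i in range(100):
        start = time.perf_counter()
        if i % 5 == 0:
            conn.execute("UPDATE customer SET name = ? WHERE id = ?",
                         ("updated", i))
        else:
            conn.execute("SELECT name FROM customer WHERE id = ?",
                         (i,)).fetchone()
        timings.append(time.perf_counter() - start)
    return timings

conn = sqlite3.connect(":memory:")
build_database(conn)
timings = run_workload(conn)

# Performance metrics: mean response time per operation, and
# throughput as operations completed per second of measured time.
mean_response = sum(timings) / len(timings)
throughput = len(timings) / sum(timings)
print(f"mean response time: {mean_response:.6f} s")
print(f"throughput: {throughput:.0f} ops/s")
```

Note that response time and throughput are computed from the same timing data; composite metrics in real benchmarks combine such measurements with cost or energy figures.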

Background

Relational Benchmarks

In the world of relational DBMS benchmarking, the Transaction Processing Performance Council (TPC) plays a preponderant role. The mission of this non-profit organization is to issue standard benchmarks, to verify their correct application by users, and to regularly publish performance test results. Its benchmarks all share variants of a classical business database (customer-order-product-supplier) and are parameterized only by a scale factor that determines the database size (e.g., from 1 to 100,000 GB).
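The scale-factor mechanism can be sketched as a simple linear sizing rule: the scale factor SF is the benchmark's only parameter, and each table's cardinality grows proportionally with it. The base cardinalities below follow the published sizing of TPC-H, one of the TPC benchmarks (the lineitem figure is approximate, as its exact row count varies slightly around 6,000,000 × SF); the helper function itself is illustrative.

```python
# Base row counts at scale factor 1, following TPC-H's sizing rules.
BASE_ROWS = {
    "supplier": 10_000,
    "customer": 150_000,
    "part": 200_000,
    "orders": 1_500_000,
    "lineitem": 6_000_000,  # approximate; varies slightly in practice
}

def cardinality(table: str, scale_factor: int) -> int:
    """Rows in `table` at the given scale factor (linear scaling)."""
    return BASE_ROWS[table] * scale_factor

# At SF = 10 (roughly a 10 GB database in TPC-H terms):
for table in BASE_ROWS:
    print(table, cardinality(table, 10))
```

A single parameter thus fixes the whole database extension, which keeps results at a given scale factor comparable across systems.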

The TPC benchmark for transactional databases, TPC-C (TPC, 2005a), has been in use since 1992. It is specifically dedicated to On-Line Transactional Processing (OLTP) applications, and features a complex database (nine types of tables of various structures and sizes) and a workload of transactions of varying complexity that are executed concurrently. TPC-C's metric is throughput, expressed in transactions per minute (tpmC).
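The shape of such a throughput measurement — concurrent clients submitting transactions, with the metric reported as transactions per minute — can be sketched as follows. The transaction body here is a placeholder (a counter update under a lock standing in for the DBMS); a real TPC-C harness would submit the benchmark's transaction mix (New-Order, Payment, and so on) to the system under test.

```python
import threading
import time

completed = 0
lock = threading.Lock()

def client(n_transactions: int) -> None:
    # One emulated client: executes its share of the workload serially,
    # concurrently with the other client threads.
    global completed
    for _ in range(n_transactions):
        # Placeholder transaction; a real harness would run a mix of
        # OLTP transactions against the DBMS here.
        with lock:
            completed += 1

start = time.perf_counter()
threads = [threading.Thread(target=client, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Throughput metric: transactions completed per minute of elapsed time.
tpm = completed / elapsed * 60
print(f"{completed} transactions in {elapsed:.3f} s -> {tpm:.0f} tpm")
```

Because the clients run concurrently, the measured throughput reflects contention on shared state, which is precisely what an OLTP benchmark is designed to stress.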

There are currently few credible alternatives to TPC-C. One is the Open Source Database Benchmark (OSDB), the result of a project from the free software community (SourceForge, 2005). OSDB extends and clarifies the specifications of an older benchmark, AS3AP. It is available as free C source code, which helps eliminate the ambiguity inherent in natural-language specifications. However, it is still an ongoing project, and the benchmark's documentation remains very basic. AS3AP's database is simple: it is composed of four relations whose size may vary from 1 GB to 100 GB. The workload consists of various queries executed concurrently. OSDB's metrics are response time and throughput.

Key Terms in this Chapter

Performance Metrics: Simple or composite metrics aimed at expressing the performance of a system.

Benchmark: A standard program that runs on different systems to provide an accurate measure of their performance.

Database Management System (DBMS): Software that handles the structuring, storage, maintenance, update, and querying of data stored in a database.
