A Study of Contemporary System Performance Testing Framework

Alex Ng (Federation University, Australia) and Shiping Chen (CSIRO Data61, Australia)
DOI: 10.4018/978-1-5225-7598-6.ch114

Abstract

Performance testing is one of the vital activities spanning the whole life cycle of software engineering. As a result, a considerable number of performance testing products and open source tools are available. It has been observed that most existing performance testing products and tools are either too expensive and complicated for small projects or too specific and simple for diverse performance tests. In this chapter, the authors present an overview of existing performance test products/tools, provide a summary of some of the contemporary system performance testing frameworks, and capture the key requirements for a general-purpose performance testing framework. Based on previous works, the authors propose a system performance testing framework that is suitable both for small, simple projects and for large, complicated performance testing projects. The core of the framework contains an abstraction that facilitates performance testing by separating the application logic from the common performance testing functionality, together with a general-purpose data model.
Chapter Preview

Background

According to Meier, Farre, Bansode, Barber, and Rea (2007), performance testing is defined as the technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the product under test. Performance-related activities, such as testing and tuning, are concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the application under test.

Meier et al. (2007) classify performance metrics into the following categories:

  • Network-Specific Metrics: A set of metrics about the overall behavior of the network used to support the system.

  • System-Related Metrics: A set of metrics that helps identify the resource utilization of the system.

  • Platform-Specific Metrics: A set of metrics related to software that is used to host the application system, such as the Microsoft .NET Framework common language runtime (CLR) and ASP.NET-related metrics.

  • Application-Specific Metrics: These include custom performance counters inserted into the application code to monitor application health and identify performance issues.

  • Service-Level Metrics: A set of metrics that helps measure overall application throughput and latency; these may also be tied to specific business scenarios.

  • Business Metrics: These metrics are indicators of business-related information, such as the number of orders placed in a given timeframe for a particular department.
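As a minimal sketch of how application-specific and business metrics might be captured in code, consider the following hypothetical in-process registry. The class and counter names (`MetricsRegistry`, `orders.placed`) are illustrative assumptions, not part of the framework described in this chapter; real systems would typically use a platform's performance-counter or metrics API.

```python
import time
from collections import defaultdict

class MetricsRegistry:
    """A hypothetical, minimal in-process metrics store."""

    def __init__(self):
        self.counters = defaultdict(int)
        self.timings = defaultdict(list)

    def increment(self, name, value=1):
        # Record a business-level event, e.g. an order being placed.
        self.counters[name] += value

    def record_timing(self, name, seconds):
        # Record how long an application operation took.
        self.timings[name].append(seconds)

metrics = MetricsRegistry()

def place_order():
    start = time.perf_counter()
    # ... application logic would run here ...
    metrics.increment("orders.placed")              # business metric
    metrics.record_timing("orders.place_time",      # application-specific metric
                          time.perf_counter() - start)

place_order()
place_order()
print(metrics.counters["orders.placed"])  # → 2
```

Instrumenting the application code directly in this way is what distinguishes application-specific and business metrics from the network-, system-, and platform-level metrics, which are typically collected outside the application.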

There are some common system performance metrics for enterprise systems, such as response time, latency, and throughput. In some contexts it is customary to call these by different names: throughput and response time, capacity and delay, or bandwidth and latency. We provide the following definitions to avoid ambiguity:
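To illustrate how response time and throughput differ in practice, the following hedged sketch measures both for an arbitrary workload. The harness and function names are assumptions for illustration only; they are not part of the framework proposed in this chapter.

```python
import time

def timed_call(fn):
    """Return (result, elapsed_seconds) for one invocation of fn."""
    start = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - start

def measure(fn, n=100):
    """Drive fn n times; report mean response time and throughput."""
    elapsed = []
    wall_start = time.perf_counter()
    for _ in range(n):
        _, dt = timed_call(fn)
        elapsed.append(dt)
    wall = time.perf_counter() - wall_start
    mean_response = sum(elapsed) / n   # seconds per request (client view)
    throughput = n / wall              # requests per second (system view)
    return mean_response, throughput

mean_rt, tps = measure(lambda: sum(range(1000)))
print(mean_rt > 0 and tps > 0)  # → True
```

The distinction matters because the two metrics can move independently: adding servers may raise throughput without improving the response time of any single request.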
