A Study of Contemporary System Performance Testing Framework


Alex Ng (Federation University, Australia) and Shiping Chen (CSIRO Data61, Australia)
Copyright: © 2018 | Pages: 14
DOI: 10.4018/978-1-5225-2255-3.ch658

Abstract

Performance testing is one of the vital activities spanning the whole life cycle of software engineering. As a result, a considerable number of performance testing products and open source tools are available. It has been observed that most existing performance testing products and tools are either too expensive and complicated for small projects, or too specific and simple for diverse performance tests. In this chapter, we present an overview of existing performance test products/tools, provide a summary of some contemporary system performance testing frameworks, and capture the key requirements for a general-purpose performance testing framework. Based on our previous work, we propose a system performance testing framework that is suitable for simple, small projects as well as complicated, large-scale performance testing projects. The core of our framework contains an abstraction that facilitates performance testing by separating the application logic from the common performance testing functionality, and a set of general-purpose data models.

Background

According to Meier, Farre, Bansode, Barber, and Rea (2007), performance testing is defined as the technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the product under test. Performance-related activities, such as testing and tuning, are concerned with achieving the response times, throughput, and resource-utilization levels that meet the performance objectives for the application under test.
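To make these notions concrete, a minimal measurement harness for response time and throughput might look like the following. This is an illustrative sketch of our own, not the chapter's framework; the `measure` helper and the dummy workload are assumptions.

```python
import time

def measure(workload, n_requests):
    """Invoke `workload` n_requests times sequentially and report
    average response time (seconds) and throughput (requests/second)."""
    start = time.perf_counter()
    response_times = []
    for _ in range(n_requests):
        t0 = time.perf_counter()
        workload()  # the operation under test
        response_times.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    avg_response_time = sum(response_times) / n_requests
    throughput = n_requests / elapsed
    return avg_response_time, throughput

# Example: a dummy workload that sleeps for about 1 ms per request
rt, tput = measure(lambda: time.sleep(0.001), 50)
print(f"avg response time: {rt * 1000:.2f} ms, throughput: {tput:.0f} req/s")
```

A real harness would additionally sample resource utilization (CPU, memory, disk, network) while the workload runs, which is exactly the kind of common functionality a testing framework can factor out of the application logic.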

Meier et al. (2007) classify performance metrics into the following categories:

  • Network-Specific Metrics: A set of metrics about the overall behavior of the network used to support the system.

  • System-Related Metrics: A set of metrics that help identify the resource utilization of the system.

  • Platform-Specific Metrics: A set of metrics related to software that is used to host the application system, such as the Microsoft .NET Framework common language runtime (CLR) and ASP.NET-related metrics.

  • Application-Specific Metrics: These include custom performance counters inserted into the application code to monitor application health and identify performance issues.

  • Service-Level Metrics: A set of metrics that measure overall application throughput and latency, or that are tied to specific business scenarios.

  • Business Metrics: These metrics are indicators of business-related information, such as the number of orders placed in a given timeframe for a particular department.
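As an illustration of how such categories might be captured in a general-purpose data model, consider the following sketch. The class and field names here are our own assumptions, not the chapter's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class MetricCategory(Enum):
    """The six metric categories from Meier et al. (2007)."""
    NETWORK = "network-specific"
    SYSTEM = "system-related"
    PLATFORM = "platform-specific"
    APPLICATION = "application-specific"
    SERVICE_LEVEL = "service-level"
    BUSINESS = "business"

@dataclass
class MetricSample:
    """One observation of a metric, tagged with its category."""
    name: str
    category: MetricCategory
    value: float
    unit: str
    timestamp: float  # seconds since the epoch

# e.g. a system-related sample recorded during a test run
sample = MetricSample("cpu_utilization", MetricCategory.SYSTEM,
                      72.5, "%", 1_700_000_000.0)
print(sample.category.value)  # "system-related"
```

Tagging every sample with its category lets a framework store heterogeneous metrics in one store while still filtering or aggregating them per category.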

There are some common system performance metrics for enterprise systems, such as response time, latency, and throughput. These metrics go by different names in different contexts: throughput and response time, capacity and delay, or bandwidth and latency. We provide the following definitions to avoid ambiguity:

Key Terms in this Chapter

CPU Utilization: Determines the percentage of time the processor is busy by measuring the percentage of time the thread of the idle process is running and then subtracting that from 100 percent.

Response Time: The total time a user waits from invoking a request on a particular system until the result is returned.

Disk I/O Utilization: The number of input and output operations performed on secondary storage per unit of time.

Main Memory Utilization: The amount of RAM used by a particular system at a given point in time.

Bandwidth: The amount of information that can be transmitted in a fixed amount of time over a particular media channel.

System Performance Testing: A special type of technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the system under test.

Scalability: A measure of the capability of a system to increase its total output under an increased load when resources (typically hardware) are added.

Throughput: The number of client requests (such as messages or transactions) a system can handle within a unit of time (such as a second or minute).

Latency: The total time spent by a system-generated message travelling from its sender (source) to its receiver (destination).

Speedup: A measure of the relative performance improvement when executing a task. Speedup can be defined in terms of different performance metrics, such as throughput or latency.
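The latency- and throughput-based views of speedup mentioned above can be illustrated with a small sketch; the function names and sample numbers are illustrative assumptions, not from the chapter.

```python
def speedup_from_latency(old_latency, new_latency):
    # Latency-based speedup: how many times faster the new system responds.
    # Lower latency is better, so the old value goes in the numerator.
    return old_latency / new_latency

def speedup_from_throughput(old_throughput, new_throughput):
    # Throughput-based speedup: ratio of requests handled per unit of time.
    # Higher throughput is better, so the new value goes in the numerator.
    return new_throughput / old_throughput

# Hypothetical measurements: latency drops from 200 ms to 50 ms,
# throughput rises from 100 req/s to 250 req/s.
print(speedup_from_latency(200.0, 50.0))      # 4.0
print(speedup_from_throughput(100.0, 250.0))  # 2.5
```

Note that the two definitions need not agree for the same system change, which is why the metric used should always be stated alongside a speedup figure.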
