Quality of Service Monitoring Strategies in Service Oriented Architecture Environments using Processor Hardware Performance Metrics

Ernest Sithole, Sally McClean, Bryan Scotney, Gerard Parr, Adrian Moore, Dave Bustard, Stephen Dawson
DOI: 10.4018/978-1-61350-432-1.ch005

Abstract

The sharp growth in data-intensive applications such as social and professional networking, online commerce, and multimedia services, together with the convergence of mobile, wireless, and internet technologies, is greatly influencing the shape and makeup of on-demand enterprise computing environments. In response to the global need for on-demand computing services, a number of trends have emerged, one of which is the growth of computing infrastructures in terms of the number of computing node entities and the widening geographic distribution of deployed node elements. Another development has been the increased complexity in the technical composition of the business computing space, owing to the diversity of technologies employed in IT implementations. Given the huge infrastructure sizes and data-handling requirements, as well as the dispersion of compute nodes and the technology disparities associated with emerging computing infrastructures, the task of quantifying performance for capacity planning, Service Level Agreement (SLA) enforcement, and Quality of Service (QoS) guarantees becomes very challenging to fulfil. In order to arrive at a viable strategy for evaluating operational performance on computing nodes, we propose the use of on-chip registers called Performance Monitoring Counters (PMCs), which form part of the processor hardware. The use of PMC measurements is largely non-intrusive and highlights performance issues associated with runtime execution on the CPU hardware architecture. Our proposed strategy of employing PMC data thus overcomes major shortcomings of existing benchmarking approaches, such as overheads in the software functionality and the inability to offer detailed insight into the various stages of CPU and memory hardware operation.
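As a concrete illustration of the kind of low-level, largely non-intrusive measurement advocated here, the C sketch below reads a CPU-cycle counter around a region of interest using the Linux perf_event_open(2) system call. This interface is an assumed example for illustration only; the chapter does not prescribe a particular operating-system mechanism for accessing the PMC hardware.

/*
 * Minimal sketch: counting CPU cycles for a region of interest
 * via the Linux perf_event_open(2) system call (illustrative only).
 */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    /* glibc provides no wrapper, so the raw syscall is used. */
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CPU_CYCLES; /* hardware cycle counter */
    attr.disabled = 1;                      /* start in the stopped state */
    attr.exclude_kernel = 1;                /* count user-space activity only */
    attr.exclude_hv = 1;

    /* pid = 0, cpu = -1: monitor this process on whichever CPU it runs. */
    int fd = (int)perf_event_open(&attr, 0, -1, -1, 0);
    if (fd < 0) {
        perror("perf_event_open");
        return 1;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* Region of interest: a placeholder workload being measured. */
    volatile uint64_t sum = 0;
    for (uint64_t i = 0; i < 10000000; i++)
        sum += i;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t cycles = 0;
    if (read(fd, &cycles, sizeof(cycles)) != (ssize_t)sizeof(cycles)) {
        perror("read");
        close(fd);
        return 1;
    }
    printf("CPU cycles in measured region: %llu\n",
           (unsigned long long)cycles);
    close(fd);
    return 0;
}

Because the counter is maintained by the processor hardware itself, enabling and reading it adds negligible overhead to the monitored region, which is the non-intrusiveness property referred to above.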

Major Developments and Challenges in On-Demand Computing

Given current developments in eEnterprise implementations, the infrastructure planning tasks of accurately determining the performance that on-demand computing resources can deliver and, in turn, of obtaining accurate estimates of the infrastructure hardware capabilities required for business computing solutions are becoming increasingly challenging to undertake. One major cause of the difficulty in quantifying performance in business computing systems has been the huge amount of user-generated data resulting from the exponential adoption of on-demand hosted computing services and applications such as social networking, e-commerce, and multimedia content sharing services. Yet another development that has caused a huge increase in consumer-generated data is the convergence of mobile, wireless and web technologies into a ubiquitous computing platform. In response to the challenges arising from these major trends, there has been phenomenal growth in the size of deployed computing infrastructures, with the magnitude of the expansion being characterised by three main dimensions of growth: (a) the increase in the number of computing machines brought together to form server domains, (b) the wide geographic spread over which participant server nodes are physically deployed, and (c) the different types of technologies that are used to produce computing solutions.

Challenges for Performance Evaluation

The physical distribution of compute nodes across the infrastructure gives rise to performance-related challenges pertaining to the quantification of network delays. These delays emanate from the communication of status and coordination messages as well as from the actual data transfers between host machines. The calculation of overall performance metrics in distributed systems, which depend on network delays, is not straightforward given that application routines running inside server nodes generate data in quantities that can vary dynamically. As a result of the changing loads, the traffic levels introduced on network links usually follow irregular patterns, leading to congestion and bandwidth delays that cannot be easily established. It is important to emphasise that while network delays do not feature in the performance evaluation strategy proposed in the latter sections of this chapter, they nevertheless constitute a key aspect captured in the Service Taxonomy presented in Figure 4.

Figure 4. Identifying performance components from service taxonomy

In most business computing solutions, the server nodes that are networked together usually come from multiple vendors, which means that the resulting infrastructure is a broad collection of non-uniform components with considerable disparities in their functional design. The resultant heterogeneity of the assembled hardware architectures means that a uniform approach cannot be applied when calibrating performance on each individual machine and, in turn, that calculating the overall performance of compute service implementations running on heterogeneous resources is challenging.

The adoption of various middleware technologies in crafting IT solutions is a further obstacle to the tasks of quantifying performance levels in business computing infrastructures and of estimating the infrastructure capacity needed to match projected future workload levels from user environments. Some of the popular middleware-driven strategies employed in developing applications and business processes use SOA-based approaches such as Representational State Transfer (REST), Simple Object Access Protocol (SOAP) and Common Object Request Broker Architecture (CORBA), as well as database packages. On the infrastructure resource fabric, middleware-based approaches for enabling service provision usually take the form of Grid or Cloud computing strategies. Executing the basic protocol and other support services provided by the middleware introduces software-based overheads into program operations, thus subjecting the output performance metrics of middleware-supported solutions to additional latencies. The impact of middleware overheads on overall performance depends on the combination of technology packages employed in developing the user applications and the resource infrastructure solutions. As in the case of network delays, this section highlights the impact of middleware overheads on performance given their relevance to the Service Taxonomy, as shown by the classifications for Interfacing Definitions, Service and Other Properties in Figure 4.

Key Terms in this Chapter

Performance Monitoring Counters: The set of special registers incorporated in modern CPU chips that serve the purpose of tracking the performance of low-level operational events as they execute at the processor cores, cache memory, main memory and Input/Output stages of server hardware.

Quality of Service: The range or agreed boundaries within which the overall output performance that is delivered to the end user has to remain in order that minimum requirements for satisfactory service are maintained in the consumer environment.

Performance: The quantitative measure of a computing implementation’s capabilities to deliver specific functions, which it has been designed and set up to serve. Key performance measures are quoted in terms of throughput rates, response/completion times of received work items and scalability trends. The scalability trends describe the sensitivity of the captured metrics to operational changes such as increases in workload levels and changes to physical configurations of the computing implementations.

Service-oriented Architecture Taxonomy: The set of classifications intended to convey the key architectural, functional and performance attributes of service-oriented computing implementations.

Service Level Agreement: The range or agreed boundaries within which the performance in the computing infrastructure has to be maintained by service providers in order to meet the minimum requirements for satisfactory service that is to be ultimately delivered to the consumer environment.

Profiling Tools: The middleware packages that provide an interfacing capability to performance monitoring counters so that the performance associated with the low-level hardware events being tracked by the counters can be conveyed to the user environment for processing, presentation and interpretation (a sketch of one such interface follows these definitions).

Operational Performance: The quantitative measure of a computing implementation’s capabilities with particular focus being on the functionality that is provided by the physical low-level activities executing in the resource fabric’s CPU, memory, disk and network hardware elements. Operational performance does not take into account the impact of software-related functions at platform, middleware and application levels.
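As an illustration of the Profiling Tools entry above, the following C sketch uses the PAPI library, one widely deployed package that interfaces user code to performance monitoring counters; PAPI is an assumed example here rather than the specific middleware discussed in the chapter. The sketch counts total cycles and retired instructions around a placeholder workload and derives instructions per cycle, a typical operational performance metric (error checks on individual PAPI calls are omitted for brevity).

/*
 * Sketch of a profiling-middleware interface to hardware counters,
 * using the PAPI library as one illustrative package.
 */
#include <stdio.h>
#include <stdlib.h>
#include <papi.h>

int main(void)
{
    int event_set = PAPI_NULL;
    long long values[2];

    /* Initialise the library and create an empty event set. */
    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI initialisation failed\n");
        return EXIT_FAILURE;
    }
    PAPI_create_eventset(&event_set);

    /* Track total cycles and retired instructions. */
    PAPI_add_event(event_set, PAPI_TOT_CYC);
    PAPI_add_event(event_set, PAPI_TOT_INS);

    PAPI_start(event_set);

    /* Placeholder workload under measurement. */
    volatile double acc = 0.0;
    for (long i = 0; i < 5000000; i++)
        acc += (double)i * 0.5;

    PAPI_stop(event_set, values);

    printf("Total cycles:     %lld\n", values[0]);
    printf("Instructions:     %lld\n", values[1]);
    printf("Instr. per cycle: %.2f\n",
           (double)values[1] / (double)values[0]);

    PAPI_cleanup_eventset(event_set);
    PAPI_destroy_eventset(&event_set);
    return EXIT_SUCCESS;
}

Instructions per cycle is a useful first-order indicator of operational performance because it reflects how effectively a workload exercises the core pipeline and memory hierarchy, independently of software-level overheads at the platform, middleware and application levels.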
