Measuring and Dealing with the Uncertainty of SOA Solutions

Yuhui Chen, Anatoliy Gorbenko, Vyachaslav Kharchenko, Alexander Romanovsky
DOI: 10.4018/978-1-60960-794-4.ch012

Abstract

The chapter investigates the uncertainty of Web Services performance and the instability of their communication medium (the Internet), and shows the influence of these two factors on the overall dependability of SOA. We present our practical experience in benchmarking and measuring the behaviour of a number of existing Web Services used in e-science and bio-informatics, provide the results of statistical data analysis and discuss the probability distribution of delays contributing to the Web Services response time. The ratio between delay standard deviation and its average value is introduced to measure the performance uncertainty of a Web Service. Finally, we present the results of error and fault injection into Web Services. We summarise our experiments with SOA-specific exception handling features provided by two web service development kits and analyse exception propagation and performance as the major factors affecting fault tolerance (in particular, error handling and fault diagnosis) in Web Services.
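
As a concrete illustration of the performance-uncertainty metric mentioned in the abstract, the following sketch (in Python, not taken from the chapter) estimates the ratio of the standard deviation of the response-time delay to its mean from repeated invocations of a service; the endpoint URL, sample size and timeout are illustrative assumptions.

# A minimal sketch (not from the chapter) of the uncertainty metric:
# the ratio of the standard deviation of the response-time delay to its mean.
# The endpoint URL, sample size and timeout are illustrative assumptions.
import time
import statistics
import urllib.request

ENDPOINT = "http://example.org/ws/blast"   # hypothetical Web Service endpoint
SAMPLES = 30                               # illustrative sample size

def measure_response_times(url, n):
    """Invoke the service n times and record the round-trip delay in seconds."""
    delays = []
    for _ in range(n):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                response.read()
        except OSError:
            continue                       # failed calls are excluded here
        delays.append(time.perf_counter() - start)
    return delays

delays = measure_response_times(ENDPOINT, SAMPLES)
if len(delays) > 1:
    mean = statistics.mean(delays)
    stdev = statistics.stdev(delays)
    print(f"mean delay = {mean:.3f}s, st. dev. = {stdev:.3f}s")
    print(f"uncertainty ratio (st. dev. / mean) = {stdev / mean:.2f}")

A ratio close to zero indicates stable response times, whereas a large ratio signals high performance uncertainty of the measured service.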

Introduction

The paradigm of Service-Oriented Architecture (SOA) is a further step in the evolution of the well-known component-based approach to building systems from Off-the-Shelf components. SOA and Web Services (WSs) were introduced to ensure effective interaction of complex distributed applications. They are now evolving within critical infrastructures (e.g. air traffic control systems), holding various business systems and services together (for example, banking, e-health, etc.). Their ability to compose and implement business workflows provides crucial support for developing globally distributed large-scale computing systems, which are becoming integral to society and the economy.

Unlike common software applications, however, Web Services work in an unstable environment as part of globally-distributed and loosely-coupled SOAs, communicating with a number of other services deployed by third parties (e.g. in different administration domains), typically with unknown dependability characteristics. When complex service-oriented systems are dynamically built, or when their components are dynamically replaced with new ones offering the same (or similar) functionality but unknown dependability and performance characteristics, ensuring and assessing their dependability becomes genuinely complicated. This is the main motivation for our work.

By their very nature Web Services are black boxes: neither their source code, nor their complete specification, nor information about their deployment environments is available; the only information known about them is their interfaces. Moreover, their dependability is not completely known and they may not provide sufficient Quality of Service (QoS); it is often safer to treat them as “dirty” boxes, assuming that they always contain bugs, do not fit well enough, and come with poor specification and documentation. Web Services are heterogeneous, as they might be developed following different standards, fault assumptions and conventions, and may use different technologies. Finally, Service-Oriented Systems are built as overlay networks over the Internet, and their construction and composition are complicated by the fact that the Internet is a poor communication medium (e.g., its quality is low and unpredictable).

Therefore, users cannot be confident of their availability, trustworthiness, reasonable response time and other dependability characteristics (Avizienis et al., 2004), as these can vary over wide ranges in a random and unpredictable manner. In this work we use the general synthetic term uncertainty to refer to the unknown, unstable, unpredictable and changeable characteristics and behaviour of Web Services and SOA, exacerbated by running these services over the Internet. Dealing with such uncertainty, which is in the very nature of SOA, is one of the main challenges that researchers are facing.

To become ubiquitous, Service-Oriented Systems should be capable of tolerating faults and potentially-harmful events arising from a variety of causes, including low or varying (decreasing) quality of components (services), shifting characteristics of the network media, component mismatches, permanent or temporary faults of individual services, composition mistakes, service disconnection, and changes in the environment and in the policies.
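
As one simple illustration of tolerating the temporary faults and service disconnections listed above, the sketch below (not from the chapter) wraps a service invocation with a timeout and a bounded number of retries; the function name, endpoint and parameters are illustrative assumptions, and real deployments would tune them to the observed delay distribution.

# A minimal sketch (not from the chapter) of masking temporary faults and
# disconnections: a service invocation wrapped with a timeout and a bounded
# number of retries. The function name and parameters are illustrative.
import time
import urllib.request

def call_service(url, timeout_s=5.0, retries=3, backoff_s=1.0):
    """Invoke a Web Service, retrying on timeouts and transient network errors."""
    last_error = None
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as response:
                return response.read()             # success: return the raw payload
        except OSError as exc:                     # covers timeouts and connection faults
            last_error = exc
            time.sleep(backoff_s * (attempt + 1))  # simple linear back-off
    raise last_error                               # permanent fault: propagate to caller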

The dependability and QoS of SOA have recently been the focus of significant research effort. A number of studies (Zheng & Lyu, 2009; Maamar, Sheng, & Benslimane, 2008; Fang et al., 2007) have introduced approaches to incorporating resilience techniques (including voting, backward and forward error recovery mechanisms, and replication) into WS architectures. There has also been work on benchmarking and experimental measurement of dependability (Laranjeiro, Vieira, & Madeira, 2007; Duraes, Vieira, & Madeira, 2004; Looker, Munro, & Xu, 2004), as well as on dependability and performance evaluation (Zheng, Zhang, & Lyu, 2010). Yet even though the existing proposals offer useful means of improving SOA dependability by enhancing particular WS technologies, most of them do not address the uncertainty challenge, which exacerbates the lack of dependability and the varying quality of services.
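
To make the replication-with-voting idea mentioned above more concrete, the following sketch (not taken from the cited works) sends the same request to several functionally equivalent replicas in parallel and returns the majority answer; the replica URLs are illustrative assumptions, and the comparison of raw payloads is a deliberate simplification.

# A minimal sketch (not from the cited works) of voting over replicated,
# functionally equivalent services. The replica URLs are illustrative.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
import urllib.request

REPLICAS = [                                   # hypothetical equivalent deployments
    "http://replica-a.example.org/ws/align",
    "http://replica-b.example.org/ws/align",
    "http://replica-c.example.org/ws/align",
]

def invoke(url):
    """Call one replica; a failed replica simply casts no vote."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.read()
    except OSError:
        return None

def voted_result(urls=REPLICAS):
    """Call all replicas in parallel and return the majority non-failed answer."""
    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        answers = [a for a in pool.map(invoke, urls) if a is not None]
    if not answers:
        raise RuntimeError("all replicas failed")
    value, votes = Counter(answers).most_common(1)[0]
    return value if votes > len(urls) // 2 else None   # None: no majority reached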
