Dependability and Fault-Tolerance: Basic Concepts and Terminology

Vincenzo De Florio (PATS Research Group, University of Antwerp and iMinds, Belgium)
Copyright: © 2009 | Pages: 20
DOI: 10.4018/978-1-60566-182-7.ch001


The general objective of this chapter is to introduce the basic concepts and terminology of the domain of dependability. Concepts such as reliability, safety, or security have been used inconsistently by different communities of researchers: the real-time systems community, the secure computing community, and so forth each had its own “lingo” and referred to concepts such as faults, errors, and failures without the required formal foundation. This changed in the early 1990s, when Jean-Claude Laprie finally introduced a tentative model for dependable computing. To date, the Laprie model of dependability is the most widespread and accepted formal definition of the terms that play a key role in this book. Consequently, the rest of this chapter introduces that model.
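To make the fault/error/failure distinction of the Laprie model concrete, here is a minimal illustrative sketch (not from the chapter; the averaging service and its numbers are hypothetical). A dormant fault in the code produces no misbehavior until it is activated, at which point it creates an error (a wrong internal state) that propagates to the interface as a failure (wrong delivered service):

```python
# Illustrative sketch of the fault -> error -> failure chain from the
# Laprie model, using a hypothetical temperature-averaging "service".

def average_temperature(readings):
    """A service containing a dormant FAULT: it divides by a hard-coded
    count instead of len(readings)."""
    total = sum(readings)
    return total / 3  # FAULT: correct only for exactly three readings

# With three readings the fault stays dormant: no error, no failure.
ok = average_temperature([10.0, 20.0, 30.0])        # correct: 20.0

# With four readings the fault is activated: the division produces an
# ERROR (wrong internal result), which reaches the service interface as
# a FAILURE: the user observes ~33.3 instead of the correct 25.0.
bad = average_temperature([10.0, 20.0, 30.0, 40.0])

print(f"ok={ok}, bad={bad:.2f}")
```

The point of the sketch is only terminological: the bug in the source is the fault, the wrong intermediate value is the error, and the deviation of the delivered service from the correct one is the failure.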
Chapter Preview

Dependability, Resilient Computing, and Fault-Tolerance

As just mentioned, the central topic of this chapter is dependability, defined in (Laprie, 1985) as the trustworthiness of a computer system such that reliance can justifiably be placed on the service it delivers. In this context, service means the manifestations of a set of external events perceived by the user as the behavior of the system (Avižienis, Laprie, & Randell, 2004); user means another system, e.g., a human being, a physical device, or a computer application, interacting with the former.

The concept of dependability as described herein was first introduced by Jean-Claude Laprie (Laprie, 1985) as a contribution to an effort by IFIP Working Group 10.4 (Dependability and Fault-Tolerance) aiming at the establishment of a standard framework and terminology for discussing reliable and fault-tolerant systems. The cited paper and other works by Laprie are the main sources for this chapter, in particular (Laprie, 1992), later revised as (Laprie, 1995) and (Laprie, 1998). A more recent work in this framework is (Avižienis, Laprie, Randell, & Landwehr, 2004). Professor Laprie has been continuously revising his model, also with the contributions of various teams of researchers in Europe and abroad. Let me just cite here the EWICS TC7 (European Workshop on Industrial Computer Systems Reliability, Safety and Security, technical committee 7), whose mission is “To promote the economical and efficient realization of programmable industrial systems through education, information exchange, and the elaboration of standards and guidelines” (EWICS, n.d.), and the ReSIST network of excellence (ReSIST, n.d.), which boasts a 50-million-item resilience knowledge base (Anderson, Andrews, & Fitzgerald, 2007) and developed a resilient computing curriculum recommended to all involved in teaching dependability-related subjects.

Laprie’s is the most famous and widely accepted definition of dependability, but it is certainly not the only one. Not surprisingly, given the societal relevance of the topic, dependability has also been given slightly different definitions (Motet & Geffroy, 2003). According to Sommerville (Sommerville, 2006), for instance, dependability is “the extent to which a critical system is trusted by its users”. This definition clearly focuses more on how the user perceives the system than on how trustworthy the system actually is. It reflects the extent of the user’s confidence that the system will operate as expected and, in particular, without failures.

In other words, dependability is considered by Sommerville and others as a measure of the quality of experience of a given user with a given service. From this it follows that the objective of dependability engineers is not to make services failure-proof, but to make their users believe so! Paraphrasing Patterson and Hennessy (Patterson & Hennessy, 1996), if a particular hazard does not occur very frequently, it may not be worth the cost to avoid it. This means that residual faults are not only inevitable, but sometimes even expected. This is the notion of “dependability economics”: because of the very high costs of achieving dependability, in some cases it may be more cost-effective to accept untrustworthy systems and pay for failure costs. This is especially relevant when time-to-market is critical to a product’s commercial success. Reaching the market sooner with a sub-optimal product may bring more revenue than arriving later with a perfectly reliable product, surrounded by early-bird competitors that have already captured the interest and trust of the public.
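The “dependability economics” trade-off above can be sketched as a simple expected-cost comparison. The following is an illustrative sketch only; the probabilities, costs, and the helper function are hypothetical numbers chosen for the example, not figures from the chapter:

```python
# Illustrative sketch of "dependability economics": compare the expected
# loss from a rare residual fault against the fixed cost of removing it.

def expected_failure_cost(failure_prob: float, cost_per_failure: float,
                          missions: int) -> float:
    """Expected total loss over a number of missions (uses of the service)."""
    return failure_prob * cost_per_failure * missions

# Hypothetical figures: a hazard occurring once in 100,000 runs, a
# moderate per-failure cost, and a fixed engineering cost to remove it.
residual = expected_failure_cost(failure_prob=1e-5,
                                 cost_per_failure=50_000.0,
                                 missions=10_000)
mitigation_cost = 20_000.0

# If the expected loss stays below the mitigation cost, accepting the
# residual fault (and paying for failures) may be the rational choice.
print(f"expected loss: {residual:.0f}, mitigation: {mitigation_cost:.0f}")
print("accept residual fault" if residual < mitigation_cost
      else "invest in mitigation")
```

With these particular numbers the expected loss (5,000) is below the mitigation cost (20,000), so the model echoes Patterson and Hennessy’s point: a sufficiently rare hazard may not be worth the cost of avoiding. The comparison is of course only as good as the probability and cost estimates fed into it.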
