Business Continuity and Business Continuity Drivers

Nijaz Bajgoric
DOI: 10.4018/978-1-60566-160-5.ch003

Abstract

The previous chapter introduced the two major concepts of continuous computing: downtime and uptime. Chapter three goes a step further and defines business continuity (business continuance) and several performance measures such as availability, reliability, and scalability. The main framework for the research is defined. Based on this framework, a systemic model of business continuity, continuous computing, and continuous computing technologies is created. In addition, the chapter presents the “Onion” model of an information architecture for business continuity and identifies IT-based business continuity drivers and technologies.
Chapter Preview

Business Continuity: Introduction

Most of today’s businesses are under continuous pressure to keep their information systems running 24/7/365 and to ensure that data and applications are continuously available. Basic business continuity terms are introduced in this section.

Business continuance, business resilience, fault tolerance, disaster tolerance, and fast and reliable data access are just a few of the objectives of contemporary business. Business continuity strategy has become a top item on both CEOs’ and CIOs’ priority lists. Business continuity today relies on continuous computing technologies that provide an efficient operating environment for continuous computing. Implementation of continuous computing technologies provides a platform for “keeping business in business,” since business-critical applications are installed on enterprise servers, run by server operating systems that include ServerWare components, backed up by data storage systems, and supported by several fault-tolerant and disaster-tolerant technologies.

The term “high availability” is associated with high system/application uptime, which is measured in terms of “nines.” The more nines in the availability ratio, the higher the level of availability provided by the operating platform. In addition to availability, two further dimensions of a server operating platform are considered: reliability and scalability.

High availability refers to the ability of a server operating environment to provide end users with efficient access to data, applications, and services for a high percentage of system uptime while minimizing both planned and unplanned downtime. As noted above, the more “nines” in the availability ratio, the higher the availability provided and the better business continuity is served.
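To make the “nines” concrete, the following minimal sketch (illustrative only, not from the chapter; the function and constant names are hypothetical) converts an availability percentage into the annual downtime it tolerates:

    # Illustrative sketch: annual downtime permitted by a given availability percentage.
    MINUTES_PER_YEAR = 365 * 24 * 60

    def allowed_annual_downtime(availability_pct: float) -> float:
        """Annual downtime (in minutes) permitted by a given availability percentage."""
        return MINUTES_PER_YEAR * (1 - availability_pct / 100)

    for pct in (99.0, 99.9, 99.99, 99.999, 99.99999):
        print(f"{pct}% availability -> {allowed_annual_downtime(pct):.2f} min/year of downtime")

Running this shows, for example, that “three nines” (99.9%) still permits over eight hours of downtime a year, while “five nines” (99.999%) allows only about five minutes.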

High reliability refers to the ability of a server operating environment to minimize the occurrence of system failures (both hardware and software), including some fault tolerance capabilities. Reliability goals are achieved by using standard redundant components and advanced fault-tolerant solutions (hardware, systems software, application software). Butler and Gray (2006) underscore the question of how system reliability translates into reliable organizational performance. They identify the paradox of “relying on complex systems composed of unreliable components for reliable outcomes.”
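Reliability is commonly quantified with mean time between failures (MTBF) and mean time to repair (MTTR); steady-state availability then follows from the standard relationship MTBF / (MTBF + MTTR). The sketch below is illustrative only; the sample figures are hypothetical, not values from the chapter:

    # Illustrative sketch: steady-state availability from MTBF and MTTR
    # (standard reliability-engineering relationship; sample numbers are hypothetical).
    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        """Fraction of time the system is up, assuming alternating failure and repair cycles."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # A server that fails once every 10,000 hours and is repaired in 1 hour on average:
    print(f"{availability(10_000, 1):.5%}")   # -> 99.99000%

The formula makes the role of redundancy explicit: availability rises either by increasing MTBF (fewer failures) or by shrinking MTTR, for example by failing over to a redundant component rather than waiting for a repair.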

High scalability refers to the ability of a server operating environment to scale up and out. Servers can be scaled up by adding more processors, RAM, and so forth, while scaling out means adding computers to form cluster or grid configurations.

Figure 1. IDC classification of availability scenarios (Source: adapted from IDC, 2006)

According to IDC (2006), high availability systems are defined as having 99% or more uptime. IDC defined a set of availability scenarios according to availability ratio, annual downtime, and user tolerance to downtime, with regard to several continuous computing technologies available today, as summarized in Figure 1.

Dataquest Perspective (Gartner) provides a similar classification shown in Figure 2.

Figure 2. Availability classification (Source: adapted from Dataquest Perspective—High Availability, http://www.itsmsolutions.com/newsletters/DITYvol2iss47.htm, accessed March 18, 2008)

Some vendors even claim “zero downtime.” Compaq (now part of HP), for instance, claimed in 2001 that 96% of its installed NonStop Himalaya servers had zero downtime. HP claims that its Integrity NonStop servers can provide “seven nines” (99.99999%) availability, an uptime level that translates to roughly three seconds of unplanned downtime a year (Boulton, 2005).

Aberdeen (2007) defines “Best in Class” (BIC) companies that leverage a high availability strategy as those demonstrating:

  • a) the overall ability to recover critical applications (and application data) in less than two hours, and

  • b) year-over-year improvement in the ability to recover data.
