Theoretical Foundations of Autonomic Computing

Copyright © 2009 | Pages: 16
DOI: 10.4018/978-1-60566-170-4.ch012

Abstract

Autonomic computing (AC) is an intelligent computing approach that autonomously carries out robotic and interactive applications based on goal- and inference-driven mechanisms. This chapter explores the theoretical foundations and technical paradigms of AC. It reviews the historical development that led to the transition from imperative computing to AC, and surveys transdisciplinary theoretical foundations for AC such as those of behaviorism, cognitive informatics, denotational mathematics, and intelligent science. On the basis of this work, a coherent framework for AC may be established for both interdisciplinary theories and application paradigms, resulting in the development of new-generation computing architectures and novel information processing systems.

Introduction

Autonomic computing (AC) is a mimicry and simulation of the natural intelligence possessed by the brain using generic computers. This indicates that the nature of software in AC is the simulation and embodiment of human behaviors, and the extension of human capability, reachability, persistency, memory, and information processing speed.

The history of AC may be traced back to the work on automata by Norbert Wiener, John von Neumann, Alan Turing, and Claude E. Shannon as early as the 1940s (Wiener, 1948; von Neumann, 1946/58/63/66; Turing, 1950; Shannon, 1956; Rabin and Scott, 1959). In the same period, Warren McCulloch proposed the term of artificial intelligence (AI) (McCulloch, 1943/65/93), and S.C. Kleene analyzed the relations of automata and nerve nets (Kleene, 1956). Bernard Widrow then developed the technology of artificial neural networks in the 1950s (Widrow and Lehr, 1990). The concepts of robotics (Brooks, 1970) and expert systems (Giarratano and Riley, 1989) were developed in the 1970s and 1980s, respectively, and intelligent systems (Meystel and Albus, 2002) and software agents (Negroponte, 1995; Jennings, 2000) emerged in the 1990s. These events and developments led to the formation of the concept of AC.

AC was first proposed by IBM in 2001, with the view that “AC is an approach to self-managed computing systems with a minimum of human interference. The term derives from the body’s autonomic nervous system, which controls key functions without conscious awareness or involvement” (IBM, 2001). Various studies on AC have been reported following the IBM initiative (Pescovitz, 2002; Kephart and Chess, 2003; Murch, 2004). The cognitive informatics foundations of AC have been revealed in (Wang, 2002a/03a/03b/04/06b/06f/07a/07c; Wang and Kinsner, 2006). A paradigm of AC in terms of cognitive machines has been surveyed in (Kinsner, 2007) and investigated in (Wang, 2006a; Wang, 2007b).

Based on cognitive informatics theories (Wang, 2002a; Wang, 2003a; Wang, 2007b), AC is proposed as a new and advanced computing technology built upon routine, algorithmic, and adaptive systems, as shown in Table 1.

Table 1. Classification of computing methodologies and systems

                                Behavior (O)
                                Constant          Variable
  Event (I)      Constant       Routine           Adaptive
                 Variable       Algorithmic       Autonomic
  Type of behavior              Deterministic     Nondeterministic
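The classification in Table 1 can be read as a simple two-axis decision: whether the stimulus events (I) are constant or variable, and whether the resulting behaviors (O) are constant (deterministic) or variable (nondeterministic). The following minimal sketch illustrates that mapping; the function name and parameters are illustrative assumptions, not from the chapter.

```python
def classify_system(event_variable: bool, behavior_variable: bool) -> str:
    """Map the (event, behavior) variability of a system to the
    computing methodology of Table 1."""
    if not event_variable and not behavior_variable:
        return "Routine"       # constant event, constant (deterministic) behavior
    if not event_variable and behavior_variable:
        return "Adaptive"      # constant event, variable (nondeterministic) behavior
    if event_variable and not behavior_variable:
        return "Algorithmic"   # variable event, constant behavior
    return "Autonomic"         # variable event, variable behavior

# A system responding to variable events with variable, goal-driven
# behavior falls into the autonomic category.
print(classify_system(event_variable=True, behavior_variable=True))  # Autonomic
```

The point of the sketch is that autonomic systems occupy the quadrant where both the stimuli and the responses are nondeterministic, which is what distinguishes them from routine, algorithmic, and adaptive computing.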
