Using Rule-Based Concepts as Foundation for Higher-Level Agent Architectures

Lars Braubach, Alexander Pokahr, Adrian Paschke
DOI: 10.4018/978-1-60566-402-6.ch021

Abstract

Declarative programming using rules has advantages in certain application domains and has been successfully applied in many real-world software projects. Besides building rule-based applications, rule concepts also provide a proven basis for the development of higher-level architectures, which enrich the existing production rule metaphor with further abstractions. One especially interesting application domain for this technology is the behavior specification of autonomous software agents, because rule bases help fulfill key characteristics of agents such as reactivity and proactivity. This chapter details which motivations promote the use of rule bases for agent behavior control and what kinds of approaches exist. Concretely, four existing agent architectures (pure rule-based, AOP, Soar, BDI) and their respective implementations (Rule Responder, Agent-0 and its successors, Soar, and Jadex) are considered. In particular, the chapter highlights in which respects these agent architectures make use of rules and with which mechanisms they extend the base functionality. Finally, the approaches are generalized by summarizing their core assumptions and extension mechanisms, and possible further application domains beyond agent architectures are presented.

Background On Agents And Multi-Agent Systems

The field of agent technology emerged during the 1990s and has its roots in different areas of computer science such as artificial intelligence (AI), software engineering (SE), and distributed computing (Luck et al. 2005). In agent technology, an agent is seen as an independent software entity situated in an environment that is capable of controlling its own behavior (i.e., an agent can act without user intervention). Although agent technology is a very diverse field with many, sometimes quite unrelated, subareas, a general consensus exists that agents can be ascribed the following set of properties (Wooldridge 2001); a short rule-based sketch after the list illustrates how reactivity and proactivity can be expressed as rules:

  • Autonomy. An agent decides on its own how to accomplish given tasks. This is also the case for agents that act on behalf of a user.

  • Reactivity. An agent continually monitors its environment and automatically reacts to changes in a timely manner, if necessary.

  • Proactivity. An agent does not only react to outside stimuli, but also initiates new actions on its own in order to pursue its (given) objectives.

  • Social ability. Agents are commonly situated in an environment composed of other (software or human) agents. For accomplishing their tasks, agents can engage in dialogs with other agents and interact in cooperative or competitive ways.
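
The following Java fragment is a minimal sketch of how reactivity and proactivity can both be expressed as condition-action rules that are matched against an agent's beliefs in a simple sense-reason-act step. All class and method names are hypothetical; this is not code from any of the platforms discussed in this chapter.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;
    import java.util.function.Predicate;

    // Minimal rule-based agent: reactivity and proactivity as condition-action rules.
    public class RuleAgent {

        // A production rule fires its action whenever its condition holds on the beliefs.
        record Rule(String name, Predicate<Map<String, Object>> condition,
                    Consumer<Map<String, Object>> action) {}

        private final Map<String, Object> beliefs = new HashMap<>(); // the agent's world model
        private final List<Rule> rules = new ArrayList<>();

        public void addRule(Rule rule) { rules.add(rule); }

        // One pass of the loop: sense (update beliefs), reason (match rules), act (fire them).
        public void step(Map<String, Object> percepts) {
            beliefs.putAll(percepts);
            for (Rule rule : rules) {
                if (rule.condition().test(beliefs)) {
                    rule.action().accept(beliefs);
                }
            }
        }

        public static void main(String[] args) {
            RuleAgent agent = new RuleAgent();
            // Reactive rule: triggered by an external stimulus.
            agent.addRule(new Rule("avoid-obstacle",
                    b -> Boolean.TRUE.equals(b.get("obstacleAhead")),
                    b -> System.out.println("turning away from obstacle")));
            // Proactive rule: triggered by the agent's own, not yet achieved, objective.
            agent.addRule(new Rule("pursue-goal",
                    b -> !Boolean.TRUE.equals(b.get("goalReached")),
                    b -> System.out.println("moving towards the goal")));
            agent.step(Map.of("obstacleAhead", true)); // fires both rules
        }
    }

Note how the reactive rule is triggered by a fresh percept, whereas the proactive rule fires as long as the agent's own objective remains unachieved, without requiring any external stimulus.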

Key Terms in this Chapter

Mentalistic Notions / Mental Attitudes: The terms 'mentalistic notions' and 'mental attitudes' refer to human properties, such as beliefs and goals, when these are used for describing software agents.

UTC: Newell (1990) introduced the term Unified Theories of Cognition (UTC) as an objective for the field of psychology. A unified theory of cognition should overcome the existing variety of psychological theories (over 3,000 in 1990) and offer a unified explanation framework for human cognition.

Software Agent: Although a great variety of definitions of the term ‘software agent’ exists (Franklin and Graesser 1997), a widely accepted definition is given by Jennings and Wooldridge (1998, p. 4): “an agent is a computer system situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives”.

Physical Symbol System Hypothesis: The Physical Symbol System Hypothesis has been formulated by Newell and Simon (1976) and states that: “A physical symbol system has the necessary and sufficient means of general intelligent action.”

Agent Architecture: Following the definition of ‘software architecture’ from Bass et al. (2005) as “the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them”, we regard an ‘agent architecture’ as the control structures that facilitate the specification and execution of agent behavior.

BDI Model: The Belief-Desire-Intention (BDI) model of human practical reasoning, developed by the philosopher Michael Bratman (1987), is a model for assessing the rationality of human actions. Unlike earlier models, which only consider desires and beliefs, the BDI model introduces future-directed intentions, which are composed into plans, as an important and irreducible concept. In the agent research community, the philosophical model has been slightly adapted to specify the behavior of software agents in terms of beliefs, goals, and plans.
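
As an illustration, the following Java fragment is a deliberately simplified sketch of how beliefs, goals, and a plan library can interact in a basic means-end reasoning step. All names are hypothetical and the first-applicable plan selection strategy is an assumption made for brevity; real BDI platforms such as Jadex are considerably more elaborate.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;
    import java.util.Set;

    // Minimal BDI sketch: beliefs, goals, and a plan library with context-sensitive selection.
    public class BdiAgent {

        // A plan is a recipe: applicable to one goal type if its precondition is believed.
        record Plan(String goalType, String precondition, Runnable body) {}

        private final Set<String> beliefs;                      // what the agent holds true
        private final Deque<String> goals = new ArrayDeque<>(); // states to bring about
        private final List<Plan> planLibrary;                   // predefined recipes

        BdiAgent(Set<String> beliefs, List<Plan> planLibrary) {
            this.beliefs = beliefs;
            this.planLibrary = planLibrary;
        }

        void adoptGoal(String goal) { goals.push(goal); }

        // Means-end reasoning: the first applicable plan for a goal becomes an intention.
        void deliberate() {
            while (!goals.isEmpty()) {
                String goal = goals.pop();
                planLibrary.stream()
                        .filter(p -> p.goalType().equals(goal))
                        .filter(p -> beliefs.contains(p.precondition()))
                        .findFirst()
                        .ifPresent(p -> p.body().run()); // execute the intended plan
            }
        }

        public static void main(String[] args) {
            BdiAgent agent = new BdiAgent(
                    Set.of("batteryLow", "chargerInRange"),
                    List.of(new Plan("recharge", "chargerInRange",
                            () -> System.out.println("docking at the charging station"))));
            agent.adoptGoal("recharge");
            agent.deliberate(); // prints: docking at the charging station
        }
    }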

Intentional Stance: The term 'intentional stance' was coined by the philosopher Daniel C. Dennett (1971) and refers to a viewpoint in which (human) mental properties are used to explain the behavior of animals or even inanimate things. McCarthy (1979) has argued that the intentional stance is also useful for developing software systems.

Deliberation Cycle: The term 'deliberation cycle' refers to the conceptual "main loop" of an agent interpreter, which illustrates the interpreter's basic mode of operation (e.g. "sense-reason-act").
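
A minimal sketch of such a loop (hypothetical names, assuming a single-threaded interpreter) could look as follows:

    // Sketch of a "sense-reason-act" deliberation cycle as an interpreter's main loop.
    public abstract class DeliberationCycle {

        protected volatile boolean alive = true;

        protected abstract void sense();            // gather percepts from the environment
        protected abstract String reason();         // update beliefs, select the next action
        protected abstract void act(String action); // execute the selected action

        // The conceptual main loop: repeat sense-reason-act until the agent terminates.
        public final void run() {
            while (alive) {
                sense();
                act(reason());
            }
        }
    }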

Multi-Agent System: Wooldridge (2001, p. 3) defines: "A multi-agent system is one that consists of a number of agents, which interact with one another […]."
