Agent Technology


J.J. Ch. Meyer
DOI: 10.4018/978-1-60566-026-4.ch015


Introduction

Agent technology is a rapidly growing subdiscipline of computer science, on the border between artificial intelligence and software engineering, that studies the construction of intelligent systems. It is centered on the concept of an (intelligent/rational/autonomous) agent. An agent is a software entity that displays some degree of autonomy: it performs actions in its environment on behalf of its user, but in a relatively independent way, taking the initiative to perform actions on its own by deliberating about its options for achieving its goal(s).

The field of agent technology emerged out of philosophical considerations about how to reason about courses of action, and human action in particular. In analytical philosophy there is an area concerned with so-called practical reasoning, which studies practical syllogisms: patterns of inference regarding actions. By way of example, a practical syllogism may have the following form (Audi, 1999, p. 728):

Would that I exercise.

Jogging is exercise.

Therefore, I shall go jogging.

Although this has the form of a deductive syllogism in the familiar Aristotelian tradition of “theoretical reasoning,” on closer inspection it appears that this syllogism does not express a purely logical deduction: the conclusion does not follow logically from the premises. Rather, it constitutes a representation of a decision of the agent (going to jog), where this decision is based on mental attitudes of the agent, namely his/her beliefs (“jogging is exercise”) and his/her desires or goals (“would that I exercise”). So practical reasoning is “reasoning directed toward action—the process of figuring out what to do,” as Wooldridge (2000, p. 21) puts it. The process of reasoning about what to do next on the basis of mental states such as beliefs and desires is called deliberation (see Figure 1).

The philosopher Michael Bratman has argued that humans (and, more generally, resource-bounded agents) also use the notion of an intention when deliberating their next action (Bratman, 1987). An intention is a desire that the agent is committed to and will try to fulfill until it believes it has achieved it or has some other rational reason to abandon it. Thus, we could say that agents, given their beliefs and desires, choose some desire as their intention and “go for it.”

This philosophical theory has been formalized in several studies, in particular the work of Cohen and Levesque (1990), Rao and Georgeff (1991), and Van der Hoek, Van Linder, and Meyer (1998), and has led to the so-called Belief-Desire-Intention (BDI) model of intelligent or rational agents (Rao & Georgeff, 1991). Since the beginning of the 1990s, researchers have turned to the problem of realizing artificial agents; we will return to this below.

Figure 1. The deliberation process in a BDI architecture
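To make Figure 1 concrete, the sketch below renders such a deliberation cycle in Python. It is not taken from the chapter: representing beliefs and desires as plain sets, and the particular selection strategy, are illustrative assumptions. Bratman's commitment shows up in the fact that the current intention is only reconsidered when it is believed achieved, or believed no longer achievable, rather than on every cycle.

    # Minimal BDI deliberation loop (an illustrative sketch, not the chapter's).
    def deliberate(beliefs, desires):
        # Commit to one desire as the current intention; here simply the
        # first desire not yet believed achieved (an assumed strategy).
        for desire in desires:
            if desire not in beliefs:
                return desire
        return None

    def bdi_loop(beliefs, desires, perceive, achievable, execute):
        intention = None
        while True:
            beliefs |= perceive()                  # revise beliefs from percepts
            if intention in beliefs:               # believed achieved: drop it
                intention = None
            elif intention is not None and not achievable(intention, beliefs):
                intention = None                   # rational reason to abandon it
            if intention is None:
                intention = deliberate(beliefs, desires)
            if intention is None:
                return beliefs                     # no desire left to pursue
            execute(intention, beliefs)            # act toward the intention

    # Toy run: acting on an intention makes it believed achieved.
    beliefs = bdi_loop({"jogging is exercise"}, ["exercised"],
                       perceive=lambda: set(),
                       achievable=lambda i, b: True,
                       execute=lambda i, b: b.add(i))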

Background: The Definition of Agenthood

Although there is no generally accepted definition of an agent, there is some consensus on the (possible) properties of an agent (Wooldridge, 2002; Wooldridge & Jennings, 1995). Agents are hardware- or software-based computer systems that enjoy the following properties (rendered as a software interface in the sketch after this list):

  • Autonomy: The agent operates without the direct intervention of humans or other agents and has some control over its own actions and internal state.

  • Reactivity: Agents perceive their environment and react to it in a timely fashion.

  • Pro-Activity: Agents take initiatives to perform actions and may set and pursue their own goals.

  • Social Ability: Agents interact with other agents (and humans) by communication; they may coordinate and cooperate while performing tasks.
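Read as requirements on a software architecture, these properties suggest a minimal agent interface. The following Python sketch is one illustrative way to phrase that contract; the class and method names are our own assumptions, not a standard API.

    # One possible rendering of the four agenthood properties (illustrative).
    from abc import ABC, abstractmethod

    class Agent(ABC):
        @abstractmethod
        def perceive(self, environment):
            """Reactivity: observe the environment so the agent can react
            to it in a timely fashion."""

        @abstractmethod
        def act(self):
            """Autonomy: choose and perform an action without direct outside
            intervention, based on the agent's own internal state."""

        @abstractmethod
        def adopt_goal(self, goal):
            """Pro-activity: take the initiative to set and pursue a goal."""

        @abstractmethod
        def send(self, message, recipient):
            """Social ability: communicate with other agents (and humans)
            to coordinate and cooperate while performing tasks."""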

Key Terms in this Chapter

Agent-Oriented Programming Language: An AOP language is a programming language that enables the programmer to program intelligent agents, in the strong sense defined previously, in terms of agent-oriented (mentalistic) notions such as beliefs, goals, and plans (for a flavor of this, see the sketch after these key terms).

Multi-Agent System (MAS)/Agent Society: A multi-agent system, or agent society, is a collection of agents that share the same environment, and possibly also tasks, goals, and norms, and that are therefore part of the same organization.

Electronic Institution: An electronic institution is a (sub)system that regulates the behavior of agents in a multi-agent system/agent society, in particular their interaction, in compliance with the norms in force in that society.

Pro-Active Agent: A pro-active agent is an agent that takes the initiative to perform actions and may set and pursue its own goals.

Agent-Oriented Software Engineering (AOSE): AOSE is the study of the construction of intelligent systems using the agent paradigm, that is, using agent-oriented notions, in any high-level programming language. In a strict sense, it is the study of the implementation of agent systems by means of agent-oriented programming languages.

Intelligent Agent: An intelligent agent is a software or hardware entity that displays (some degree of) autonomous, reactive, pro-active, and social behavior; in a strong sense, it is an agent that possesses mental or cognitive attitudes, such as beliefs, desires, goals, intentions, plans, commitments, and so forth.

Social Agent: A social agent is an agent that interacts with other agents (and humans) by communication; it may coordinate and cooperate with other agents while performing tasks.

Believable Agent: A believable agent is an agent, typically occurring in a virtual environment, that displays natural behavior, such that the user of the system may regard it as an entity that interacts with him/her in a natural way.

Agent-Oriented Programming (AOP): AOP is an approach to constructing agents by programming them in terms of mentalistic notions such as beliefs, desires, and intentions.

Reactive Agent: A reactive agent is an agent that perceives its environment and reacts to it in a timely fashion.
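To give a feel for what programming in terms of beliefs, goals, and plans can look like (as promised under Agent-Oriented Programming Language above), here is a small sketch in Python: an agent program phrased as mentalistic data plus a toy interpreter. Both the data layout and the interpreter are assumptions made for illustration; they are not the syntax or semantics of any actual agent-oriented programming language.

    # Illustrative only: an "agent program" phrased in mentalistic notions,
    # with a toy interpreter. Not modeled on any real AOP language.
    program = {
        "beliefs": {"jogging is exercise"},        # what the agent holds true
        "goals": ["exercise"],                     # states it wants to achieve
        # goal -> (belief that must hold for the plan, actions to perform)
        "plans": {"exercise": ("jogging is exercise",
                               ["put on shoes", "jog"])},
    }

    def run(program):
        for goal in program["goals"]:
            condition, actions = program["plans"][goal]
            if condition in program["beliefs"]:    # the plan is applicable
                for action in actions:
                    print("executing:", action)
                program["beliefs"].add(goal)       # goal now believed achieved

    run(program)  # prints: executing: put on shoes / executing: jog

Note how this mirrors the jogging syllogism above: the belief "jogging is exercise" makes a plan applicable that serves the goal "exercise".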
