Decision Making in Intelligent Agents

Mats Danielson, Love Ekenberg
Copyright: © 2009 |Pages: 6
DOI: 10.4018/978-1-59904-849-9.ch066

Abstract

There are several ways of building complex distributed software systems, for example in the form of software agents. But regardless of the form, there are some common problems having to do with specification versus execution. One of the problems is the inherent dynamics of the environment many systems are exposed to. The properties of the environment are not known with any precision at the time of construction, which renders a specification of the system incomplete by definition. A traditional software agent is only prepared to handle situations conceived of and implemented at compile time. Even though it can operate in varying contexts, its decision-making abilities are static. One remedy is to prepare the distributed components for a truly dynamic environment, i.e., an environment with changing and somewhat unpredictable conditions. A rational software agent needs both a representation of the decision problem at hand and means for evaluating it. AI has traditionally addressed some parts of this problem, such as representation and reasoning, but has hitherto addressed the decision-making abilities of independent distributed software components to a lesser degree (Ekenberg, 2000a, 2000b). Such decision making often has to be carried out under severe uncertainty regarding several parameters. Thus, methods for independent decision-making components should be able to handle uncertainty in the probabilities and utilities involved. Such methods have mostly been studied as means of representation, but are now being developed into functional theories of decision making suitable for dynamic use by software agents and other distributed components. Such a functional theory will also benefit analytical decision support systems intended to aid humans in their decision making. Thus, the generic term agent below stands for a dynamic software component as well as a human or a group of humans assisted by intelligent software.
Chapter Preview

Background

Ramsey (1926/78) was the first to suggest a theory integrating ideas on subjective probability and utility, presenting (informally) a general set of axioms for preference comparisons between acts with uncertain outcomes (probabilistic decisions). von Neumann and Morgenstern (1947) established the foundations of a modern theory of utility. They stated a set of axioms that they deemed reasonable for a rational decision-maker (such as an agent) and demonstrated that, given that she acts in accordance with the axioms, the agent should prefer the alternative with the highest expected utility. This is the principle of maximizing the expected utility. Savage (1954/72) published a thorough treatment of a complete theory of subjective expected utility. Savage, von Neumann, and others structured decision analysis by proposing reasonable principles governing decisions and by constructing a theory out of them. In other words, they (and later many others) formulated sets of axioms meant to justify their particular attitudes towards the utility principle; cf., e.g., Herstein and Milnor (1953), Suppes (1956), Jeffrey (1965/83), and Luce and Krantz (1971). In classical decision analysis of the kind suggested by Savage and others, a widespread opinion is that utility theory captures the concept of rationality.

After Raiffa (1968), probabilistic decision models are nowadays often given a tree representation (see Fig. 1). A decision tree consists of a root, representing a decision, a set of event nodes, representing some kind of uncertainty, and consequence nodes, representing possible final outcomes. In the figure, the decision is depicted as a square, the events as circles, and the final consequences as triangles. Events unfold from left to right, until final consequences are reached. There may also be more than one decision to make, in which case the sub-decisions are made before the main decision.
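The evaluation such a tree supports can be sketched in a few lines: each alternative under the root decision is evaluated recursively, with event nodes contributing probability-weighted sums and consequence nodes contributing their utilities, and the alternative with the highest expected utility is preferred. The tree below is an invented example; the recursion, not the numbers, is the point.

```python
def expected_utility(node):
    """Recursively evaluate a decision-tree node.

    A node is either a consequence node (a plain utility value) or an
    event node: a list of (probability, subtree) pairs.
    """
    if isinstance(node, (int, float)):          # consequence node
        return node
    return sum(p * expected_utility(sub) for p, sub in node)

# Two alternatives under the root decision node (numbers invented).
alternatives = {
    "A1": [(0.6, 100), (0.4, 20)],              # one event node, two outcomes
    "A2": [(0.5, [(0.5, 150), (0.5, 0)]),       # nested event node
           (0.5, 60)],
}

# The principle of maximizing the expected utility: pick the best alternative.
best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
```

Here `expected_utility(alternatives["A1"])` is 0.6·100 + 0.4·20 = 68, which beats the 67.5 of `A2`, so `best` is `"A1"`.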

Figure 1. Decision tree

Key Terms in this Chapter

Marginal Belief Distribution: Let a unit cube $B = (b_1, \dots, b_k)$ and $F \in \mathrm{BD}(B)$ be given. Furthermore, let $B_{i-} = (b_1, \dots, b_{i-1}, b_{i+1}, \dots, b_k)$. Then $f_i(b_i) = \int_{B_{i-}} F \, dV_{B_{i-}}$ is a marginal belief distribution over the axis $b_i$.

Decision Tree: A decision tree consists of a root node, representing a decision, a set of intermediate (event) nodes, representing some kind of uncertainty, and consequence nodes, representing possible final outcomes. Usually, probability distributions are assigned in the form of weights in the event nodes as measures of the uncertainties involved.

Admissible Alternative: Given a decision tree and two alternatives $A_i$ and $A_j$, $A_i$ is at least as good as $A_j$ iff $E(A_i) - E(A_j) \geq 0$ for all consistent variable assignments for the probabilities and values, where $E(A_i)$ is the expected value of $A_i$. $A_i$ is better than $A_j$ iff $A_i$ is at least as good as $A_j$ and $E(A_i) - E(A_j) > 0$ for some consistent variable assignment. $A_i$ is admissible iff no other $A_j$ is better.
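As a rough illustration of how this definition can be operationalized, the sketch below samples consistent assignments from probability and value intervals (all intervals invented) and flags an alternative as dominated when another alternative's expected value is at least as high in every sample and strictly higher in some. Sampling only approximates the "for all consistent assignments" quantifier, so this is a heuristic screen, not a decision procedure.

```python
import random

def sample_ev(p_ivals, v_ivals, rng):
    # Draw probabilities in their intervals and renormalize to sum to 1,
    # a crude way of staying inside the consistent region.
    ps = [rng.uniform(lo, hi) for lo, hi in p_ivals]
    s = sum(ps)
    vs = [rng.uniform(lo, hi) for lo, hi in v_ivals]
    return sum((p / s) * v for p, v in zip(ps, vs))

def admissible(alts, n=2000, seed=0):
    rng = random.Random(seed)
    names = list(alts)
    samples = [{a: sample_ev(*alts[a], rng) for a in names}
               for _ in range(n)]
    def better(a, b):  # "a is better than b" over the sampled assignments
        diffs = [s[a] - s[b] for s in samples]
        return min(diffs) >= 0 and max(diffs) > 0
    return [a for a in names
            if not any(better(b, a) for b in names if b != a)]

alts = {
    # (probability intervals, value intervals) per alternative; invented.
    "A1": ([(0.5, 0.7), (0.3, 0.5)], [(90, 110), (10, 30)]),
    "A2": ([(0.5, 0.7), (0.3, 0.5)], [(30, 45), (5, 15)]),
}
```

With these intervals `A1`'s expected value exceeds `A2`'s under every consistent assignment, so `admissible(alts)` returns only `["A1"]`.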

Projection: Let $B = (b_1, \dots, b_k)$ and $A = (b_{i_1}, \dots, b_{i_s})$, $i_j \in \{1, \dots, k\}$, be unit cubes. Furthermore, let $F \in \mathrm{BD}(B)$, and let $f_A = \int_{B-A} F \, dV_{B-A}$. Then $f_A$ is the projection of $F$ on $A$. A projection of a belief distribution is also a belief distribution.
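For intuition, a projection can be approximated numerically by integrating out the axes not in $A$. The density and grid resolution below are invented for illustration: on the unit square, projecting $F(x, y) = 2x$ onto the $x$-axis should give $f(x) = 2x$.

```python
N = 1000                       # grid resolution (invented)
h = 1.0 / N

def F(x, y):
    # Example joint belief distribution; integrates to 1 on the unit square.
    return 2 * x

def project_on_x(F, x):
    # f_A(x) = integral of F(x, y) over y, approximated by a midpoint rule.
    return sum(F(x, (j + 0.5) * h) for j in range(N)) * h

f_half = project_on_x(F, 0.5)      # analytically 2 * 0.5 = 1.0
```

The same integrate-out-the-rest recipe, applied to all axes but one, yields the marginal belief distribution defined above.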

Joint Belief Distribution: Let a unit cube be represented by $B = (b_1, \dots, b_k)$. By a joint belief distribution over $B$, we mean a positive distribution $F$ defined on the unit cube $B$ such that $\int_B F \, dV_B = 1$, where $V_B$ is some $k$-dimensional Lebesgue measure on $B$.

Expected Value: Given a decision tree with $r$ alternatives $A_i$ for $i = 1, \dots, r$, the expression $E(A_i) = \sum_{j=1}^{m} p_{ij} v_{ij}$, where $p_{ij}$, $j \in \{1, \dots, m\}$, denote probability variables and $v_{ij}$ denote value variables, is the expected value of alternative $A_i$.

Centroid: Given a belief distribution $F$ over a cube $B$, the centroid $F_c$ of $F$ is $F_c = \int_B x F(x) \, dV_B$, where $V_B$ is some $k$-dimensional Lebesgue measure on $B$.
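The centroid integral can be estimated by Monte Carlo: sampling uniformly over the unit cube (volume 1), the sample mean of $x \cdot F(x)$ estimates $\int_B x F(x)\,dV_B$. The density below is invented for illustration; $F(x, y) = 2x$ integrates to 1 on the unit square and has centroid $(2/3, 1/2)$.

```python
import random

def centroid(F, dim, n=200_000, seed=0):
    rng = random.Random(seed)
    acc = [0.0] * dim
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        w = F(x)
        for d in range(dim):
            acc[d] += x[d] * w
    # Uniform sampling over a unit cube has volume 1, so the sample mean
    # of x * F(x) estimates the integral defining the centroid.
    return [a / n for a in acc]

F = lambda x: 2 * x[0]          # invented example density
c = centroid(F, 2)              # approaches (2/3, 1/2) as n grows
```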
