L. Enrique Sucar (National Institute for Astrophysics, Optics and Electronics, Mexico), Eduardo Morales (National Institute for Astrophysics, Optics and Electronics, Mexico) and Jesse Hoey (University of Waterloo, Canada)
DOI: 10.4018/978-1-60960-165-2.ch001


This chapter gives a general introduction to decision-theoretic models in artificial intelligence and an overview of the rest of the book. It starts by motivating the use of decision-theoretic models in artificial intelligence and discussing the challenges that arise as these techniques are applied to develop intelligent systems for complex domains. Then it introduces decision theory, including its axiomatic bases and the principle of maximum expected utility; a brief introduction to decision trees is also presented. Finally, an overview of the three main parts of the book (fundamentals, concepts, and solutions) is presented.
Chapter Preview

Artificial Intelligence And Decision Theory

To achieve their goals, intelligent agents, natural or artificial, have to select a course of action among many possibilities. That is, they have to make decisions based on the information they can obtain from their environment, their previous knowledge, and their objectives. In many cases the information and knowledge are incomplete or unreliable, and the results of their decisions are not certain; that is, they have to make decisions under uncertainty. For instance: a medical doctor in an emergency must act promptly even if she has limited information on the patient’s state; an autonomous vehicle that detects what might be an obstacle in its way must decide whether to turn or stop without being certain about the obstacle’s distance, size, and velocity; a financial agent needs to select the best investment according to its vague predictions of the expected returns of the different alternatives and its clients’ requirements. In all these cases, the agent should try to make the best decision based on limited information and resources (time, computational power, etc.). How can we determine which is the best decision?

Decision theory provides a normative framework for decision making under uncertainty. It is based on the concept of rationality: that an agent should try to maximize its utility or minimize its costs. This assumes that there is some way to assign utilities (usually a number, which can correspond to monetary value or any other scale) to the result of each alternative action, such that the best decision is the one with the highest utility. For example, if we wanted to select a stock in which to invest $1000 for a year, and we knew what the price of each stock would be after a year, then we should invest in the stock that will provide the highest return. Of course, we cannot predict stock prices a year in advance with precision, and in general we are not sure of the results of each possible decision, so we need to take this into account when we calculate the value of each alternative. Decision theory therefore considers the expected utility, which averages all the possible results of a decision, weighted by their probabilities. Thus, in a nutshell, a rational agent must select the decision that maximizes its expected utility.
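The maximum-expected-utility principle described above can be sketched in a few lines of code. The stocks, outcome probabilities, and dollar returns below are hypothetical, chosen only to illustrate the calculation:

```python
# Illustrative sketch of the maximum-expected-utility principle.
# The alternatives and their (probability, utility) outcomes are hypothetical.

def expected_utility(outcomes):
    """Average the utility of each outcome, weighted by its probability."""
    return sum(p * u for p, u in outcomes)

# Each alternative maps to a list of (probability, utility) pairs,
# e.g. the possible values of investing $1000 in a stock for a year.
alternatives = {
    "stock_a": [(0.6, 1200), (0.4, 900)],   # EU = 0.6*1200 + 0.4*900 = 1080
    "stock_b": [(0.1, 2000), (0.9, 950)],   # EU = 0.1*2000 + 0.9*950 = 1055
}

# A rational agent selects the alternative with the highest expected utility.
best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
print(best)  # stock_a
```

Note that the riskier stock_b offers a higher best-case return, but once all outcomes are weighted by their probabilities, stock_a has the higher expected utility.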

Decision theory was initially developed in economics and operations research (von Neumann & Morgenstern, 1944), but in recent years it has attracted the attention of artificial intelligence (AI) researchers interested in understanding and building intelligent agents. These intelligent agents, such as robots, financial advisers, and intelligent tutors, must deal with problems similar to those encountered in economics and operations research, but with two main differences.

One difference has to do with the size of the problems, which in artificial intelligence tend to be much larger, in general, than in traditional applications in economics, with many more possible states of the environment and, in some cases, a larger number of actions or decisions for the agent. For example, consider a mobile robot moving in a large building that has to decide on the best set of movements to take it from one place in the building to another. In this case the world state can be represented as the position of the robot in the building, and the agent’s actions are the set of possible motions (direction and velocity) of the robot. The problem can then be formulated as the selection of the best motion for each position in the building so as to reach the goal position (minimizing distance or time, for instance). Here the numbers of states and actions are in principle infinite, or very large if we discretize them. The size of the problems, in terms of states and actions, implies a problem of computational complexity, in space and time, so AI has to deal with these issues in order to apply decision theory to complex scenarios.
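To get a feel for how quickly discretization produces large problems, consider the robot example above. The building dimensions, grid resolution, and action set below are hypothetical; the point is only to count the entries a tabular representation would need:

```python
# A rough illustration of how discretization determines problem size.
# Building dimensions, resolution, and action set are hypothetical.

from itertools import product

# Discretize a 100m x 50m floor into 1m grid cells,
# and the robot's heading into 8 compass directions.
x_cells, y_cells, headings = 100, 50, 8
num_states = x_cells * y_cells * headings        # 40,000 states

# Actions: combinations of a few turn angles and speeds.
turns = [-45, 0, 45]         # degrees
speeds = [0.0, 0.5, 1.0]     # m/s
actions = list(product(turns, speeds))
num_actions = len(actions)   # 9 actions

# A table with one value per state-action pair already needs:
print(num_states, num_actions, num_states * num_actions)  # 40000 9 360000
```

Halving the grid resolution or adding one more state variable (say, battery level) multiplies these numbers further, which is why computational complexity is a central concern when applying decision theory in AI.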

The other main difference has to do with knowledge about the problem domain, that is, having a model of the problem as required to apply decision-theoretic techniques to solve it. This means, in general, knowledge of all the possible domain states and possible actions, and of the probability of each outcome of a decision and its corresponding utility. In many AI applications such a model is not known in advance and can be difficult to obtain. Returning to the robot navigation example, the robot might not have a precise model of the environment and might also lack a detailed model of its dynamics with which to predict exactly what its position will be after each movement. So AI researchers also have to deal with the problem of knowledge acquisition, or learning.
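One common way to acquire such a model is to estimate the outcome probabilities from experience. The sketch below uses simple maximum-likelihood counts over observed transitions; the states, action, and sample trajectory are hypothetical:

```python
# A minimal sketch of learning a transition model from experience,
# using maximum-likelihood counts. The states, action, and observed
# trajectory below are hypothetical.

from collections import Counter, defaultdict

counts = defaultdict(Counter)  # (state, action) -> Counter over next states

# Observed (state, action, next_state) triples, e.g. from a robot's logs.
experience = [
    ("hall", "forward", "hall"),
    ("hall", "forward", "room"),
    ("hall", "forward", "hall"),
    ("room", "forward", "room"),
]

for s, a, s2 in experience:
    counts[(s, a)][s2] += 1

def transition_prob(s, a, s2):
    """Estimate P(s' | s, a) as the fraction of observed outcomes."""
    total = sum(counts[(s, a)].values())
    return counts[(s, a)][s2] / total if total else 0.0

print(transition_prob("hall", "forward", "room"))  # 1/3, about 0.333
```

With the probabilities in hand, the expected utility of each action can be computed as in the investment example, so learning the model and deciding with it fit together naturally.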
