
Luis Enrique Sucar (National Institute for Astrophysics, Optics and Electronics, Mexico)

Source Title: Decision Theory Models for Applications in Artificial Intelligence: Concepts and Solutions

Copyright: © 2012
|Pages: 24
DOI: 10.4018/978-1-60960-165-2.ch002

Chapter Preview

We start by motivating the use of probabilistic graphical models and explaining their relevance to artificial intelligence. After a brief review of the basics of probability theory, we introduce the different types of models covered in this book, each of which is described in more detail in later chapters.

Several important problems in Artificial Intelligence (AI), such as diagnosis, recognition, and planning, have to deal with *uncertainty*. Probability theory provides a well-established foundation for managing uncertainty, so it is natural to use it for reasoning under uncertainty in AI. However, if we apply probability in a naive way to the complex problems that are frequent in AI, we are soon deterred by computational complexity. For instance, assume that we want to build a probabilistic system for medical diagnosis, considering 10 possible diseases and 20 different symptoms; for simplicity, assume that each symptom is represented as a binary variable (present or absent). One way to estimate the probability of a certain disease, *D _{i}*, given the set of symptoms *S _{1}*, ..., *S _{20}*, is to apply Bayes' rule:

P(D _{i} | S _{1}, ..., S _{20}) = P(D _{i}) P(S _{1}, ..., S _{20} | D _{i}) / P(S _{1}, ..., S _{20})
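Bayes' rule in this diagnosis setting can be checked numerically. The following is a minimal sketch with two diseases and a single symptom; the priors and likelihoods are made-up numbers for illustration only:

```python
# Toy Bayes-rule computation (all numbers invented for illustration):
# two candidate diseases and one binary symptom that is present.
p_d = {"flu": 0.1, "cold": 0.9}          # priors P(D_i)
p_s_given_d = {"flu": 0.8, "cold": 0.2}  # likelihoods P(symptom | D_i)

# Evidence term P(S) by total probability over the diseases.
p_s = sum(p_d[d] * p_s_given_d[d] for d in p_d)

# Posterior P(D_i | S) via Bayes' rule.
posterior = {d: p_d[d] * p_s_given_d[d] / p_s for d in p_d}
print(posterior)  # flu ~ 0.308, cold ~ 0.692
```

Note how the symptom, although more likely under "flu", does not overturn the strong prior for "cold"; the posterior balances both terms of the rule.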

The second term on the right-hand side of this equation requires storing a table of 10 × 2^{20}, or approximately 10 million, probabilities, which, besides the memory requirements, are very difficult to obtain either from an expert or from data. A way to simplify this problem is the common assumption that all the symptoms are independent given the disease, so we can instead apply what is called the *Naive Bayes Classifier*:

P(D _{i} | S _{1}, ..., S _{20}) = P(D _{i}) P(S _{1} | D _{i}) P(S _{2} | D _{i}) ... P(S _{20} | D _{i}) / P(S _{1}, ..., S _{20})

In this case the number of required probabilities for the same term drops drastically: we need 20 × (10 × 2) entries, or 400 probability values (200 independent values, since the rest can be deduced from the axioms of probability theory). However, the independence assumptions may not be valid, so the results could suffer, and in some cases badly!
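The two parameter counts can be verified with a few lines of arithmetic; this sketch uses exactly the figures of the running example (10 diseases, 20 binary symptoms):

```python
# Parameter counts for the medical-diagnosis example.
n_diseases, n_symptoms = 10, 20

# Full conditional table P(S_1, ..., S_20 | D_i): one entry per
# disease and per joint configuration of the 20 binary symptoms.
full_table = n_diseases * 2 ** n_symptoms

# Naive Bayes: one small table P(S_j | D_i) per symptom, each with
# (n_diseases * 2) entries.
naive_tables = n_symptoms * (n_diseases * 2)

print(full_table)   # 10485760, i.e. ~10 million entries
print(naive_tables) # 400 entries
```

The ratio between the two counts (about 26,000 to 1) is what makes the independence assumption so tempting despite its risks.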

*Probabilistic Graphical Models* (PGMs) provide a middle ground between these two extremes. The basic idea is to consider only those independence relations that are valid for a certain problem, and to include them in the probabilistic model so as to reduce complexity, both in memory requirements and in computation time. A natural way to represent the dependence and independence relations between a set of variables is a graph, in which variables that are *directly* dependent are connected, and the independence relations are implicit in this *dependency graph*. Before we give a more formal definition of PGMs and discuss the different types of models, we review some basic concepts of probability theory.
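The idea of encoding only direct dependencies can be sketched with a plain adjacency list; the variable names below are invented for illustration:

```python
# A toy dependency graph for a fragment of the diagnosis example.
# A directed edge X -> Y means Y depends directly on X; variables
# with no connecting path (given their neighbours) are treated as
# independent.
graph = {
    "Disease": ["Fever", "Cough"],  # symptoms depend on the disease
    "Fever": [],
    "Cough": [],
}

# With this structure the joint distribution factorises as
# P(Disease, Fever, Cough) =
#     P(Disease) P(Fever | Disease) P(Cough | Disease),
# which is exactly the naive Bayes factorisation for two symptoms.
parents = {v: [u for u, ch in graph.items() if v in ch] for v in graph}
print(parents["Fever"])  # ['Disease']
```

Each variable then needs only a table conditioned on its parents, which is how the graph structure translates into the memory savings discussed above.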

