Introduction to Bayesian Networks and Influence Diagrams

Luis Enrique Sucar (National Institute for Astrophysics, Optics and Electronics, Mexico)
DOI: 10.4018/978-1-60960-165-2.ch002

Abstract

In this chapter we will cover the fundamentals of probabilistic graphical models, in particular Bayesian networks and influence diagrams, which are the basis for some of the techniques and applications described in the rest of the book. First we will give a general introduction to probabilistic graphical models, including the motivation for using these models, a brief history, and a general description of the main types of models. We will also include a brief review of the basics of probability theory. The core of the chapter will be the next three sections, devoted to: (i) Bayesian networks, (ii) dynamic Bayesian networks and (iii) influence diagrams. For each, we will introduce the model and its properties, and give some examples. We will briefly describe the main inference techniques for the three types of models. For Bayesian and dynamic Bayesian networks we will discuss learning, including structure and parameter learning, describing the main types of approaches. At the end of the section on influence diagrams we will briefly introduce sequential decision problems as a link to the chapter on MDPs and POMDPs. We conclude the chapter with a summary and pointers for further reading for each topic.

Introduction

We start by motivating the use of probabilistic graphical models and explaining their relevance to artificial intelligence. After a brief review of the basics of probability theory, we introduce the different types of models that will be covered in this book and described in more detail later.

Motivation

Several important problems in Artificial Intelligence (AI), such as diagnosis, recognition, and planning, have to deal with uncertainty. Probability theory provides a well-established foundation for managing uncertainty, so it is natural to use it for reasoning under uncertainty in AI. However, if we apply probability in a naive way to the complex problems that are frequent in AI, we are soon deterred by computational complexity. For instance, assume that we want to build a probabilistic system for medical diagnosis, and we consider 10 possible diseases and 20 different symptoms; for simplicity, assume that each symptom is represented as a binary variable (present or absent). One way to estimate the probability of a certain disease, Di, given a set of symptoms S1, S2, …, S20, is to use Bayes' rule (see the section on Probability Theory):

P(D_i \mid S_1, S_2, \ldots, S_{20}) = \frac{P(D_i)\, P(S_1, S_2, \ldots, S_{20} \mid D_i)}{P(S_1, S_2, \ldots, S_{20})} \quad (1)

The second term on the right-hand side of this equation requires storing a table of 10 × 2^20, or approximately 10 million, probabilities, which, besides the memory requirements, are very difficult to obtain either from an expert or from data. A way to simplify this problem is the common assumption that all the symptoms are independent given the disease, so we can apply instead what is called the naive Bayes classifier:

P(D_i \mid S_1, S_2, \ldots, S_{20}) = \frac{P(D_i) \prod_{j=1}^{20} P(S_j \mid D_i)}{P(S_1, S_2, \ldots, S_{20})} \quad (2)

In this case the number of required probabilities for the same term drops drastically: we require 20 × (10 × 2) entries, or 400 probability values (200 independent values, since the rest can be deduced from the axioms of probability theory). However, the independence assumptions may not be valid, so the results could be worse, and in some cases much worse!
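The contrast between the two parameter counts, and the way the naive Bayes posterior is computed, can be sketched in a few lines of Python. This is an illustrative sketch only: the prior and the conditional probabilities below are randomly generated placeholders, not the medical probabilities the chapter refers to.

```python
import random

random.seed(0)

n_diseases, n_symptoms = 10, 20

# Full conditional P(S1,...,S20 | Di): one entry per disease per
# symptom configuration, i.e. 10 * 2**20 entries, as noted above.
full_table_size = n_diseases * 2 ** n_symptoms   # 10485760

# Naive Bayes: one P(Sj = present | Di) per symptom-disease pair; the
# "absent" probability is its complement, giving 200 independent values.
naive_table_size = n_diseases * n_symptoms       # 200

# Hypothetical parameters: uniform prior P(Di) and random P(Sj=present|Di).
prior = [1.0 / n_diseases] * n_diseases
p_symptom = [[random.random() for _ in range(n_symptoms)]
             for _ in range(n_diseases)]

def posterior(symptoms):
    """P(Di | S1..S20) under the naive Bayes assumption (equation 2)."""
    scores = []
    for i in range(n_diseases):
        score = prior[i]                     # P(Di)
        for j, present in enumerate(symptoms):
            p = p_symptom[i][j]
            score *= p if present else (1.0 - p)   # product of P(Sj | Di)
        scores.append(score)
    total = sum(scores)                      # plays the role of P(S1,...,S20)
    return [s / total for s in scores]

observed = [True] * 5 + [False] * 15         # an arbitrary symptom pattern
post = posterior(observed)
```

Note that the denominator P(S1, …, S20) never needs to be stored: normalizing the 10 unnormalized scores recovers it implicitly.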

Probabilistic Graphical Models (PGMs) provide a middle ground between these two extremes. The basic idea is to consider only those independence relations that are valid for a certain problem, and to include them in the probabilistic model so as to reduce complexity in terms of both memory requirements and computational time. A natural way to represent the dependence and independence relations between a set of variables is with a graph, in which variables that are directly dependent are connected, and the independence relations are implicit in this dependency graph. Before we give a more formal definition of PGMs and discuss the different types of models, we will review some basic concepts of probability theory.
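The saving that such a dependency graph buys can be made concrete by counting parameters. The sketch below uses a small hypothetical network of five binary variables (the variable names and structure are invented for illustration, not taken from the chapter): each node stores its parents, and with binary variables its conditional probability table needs one free value per parent configuration, i.e. 2^|pa(X)| independent parameters.

```python
# Hypothetical five-variable network, all variables binary.
# Each node maps to the list of its parents in the dependency graph.
parents = {
    "Flu":        [],
    "Fever":      ["Flu"],
    "Cough":      ["Flu"],
    "Smokes":     [],
    "Bronchitis": ["Smokes", "Cough"],
}

def graph_params(parents):
    # With the graph: 2**|pa(X)| independent parameters per binary node.
    return sum(2 ** len(pa) for pa in parents.values())

def full_joint_params(parents):
    # Without any independence assumptions: 2**n - 1 for n binary variables.
    return 2 ** len(parents) - 1

print(graph_params(parents))       # 1 + 2 + 2 + 1 + 4 = 10
print(full_joint_params(parents))  # 31
```

Even for five variables the graph-structured model needs 10 parameters instead of 31, and the gap widens exponentially as variables are added.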
