Pseudo-Independent Models and Decision Theoretic Knowledge Discovery
Yang Xiang
Copyright: © 2009 |Pages: 7
DOI: 10.4018/978-1-60566-010-3.ch249

Abstract

Graphical models such as Bayesian networks (BNs) (Pearl, 1988; Jensen & Nielsen, 2007) and decomposable Markov networks (DMNs) (Xiang, Wong, & Cercone, 1997) have been widely applied to probabilistic reasoning in intelligent systems. Knowledge representation using such models for a simple problem domain is illustrated in Figure 1: Virus can damage computer files, and so can a power glitch. A power glitch also causes a VCR to reset. Links, and the absence of links, convey dependency and independency relations among these variables, and the strength of each link is quantified by a probability distribution. The networks are useful for inferring whether the computer has a virus after checking files and the VCR. This chapter considers how to discover such models from data. Discovery of graphical models (Neapolitan, 2004) by testing all alternatives is intractable. Hence, heuristic search is commonly applied (Cooper & Herskovits, 1992; Spirtes, Glymour, & Scheines, 1993; Lam & Bacchus, 1994; Heckerman, Geiger, & Chickering, 1995; Friedman, Geiger, & Goldszmidt, 1997; Xiang, Wong, & Cercone, 1997). All heuristics make simplifying assumptions about the unknown data-generating models. These assumptions exclude certain models in order to gain efficiency. Often the assumptions, and the models they exclude, are not explicitly stated. Users of such heuristics may suffer from this exclusion without even knowing it. This chapter examines assumptions underlying common heuristics and their consequences for graphical model discovery. A decision theoretic strategy for choosing heuristics is introduced that can take into account a full range of consequences (including efficiency in discovery, efficiency in inference using the discovered model, and cost of inference with an incorrectly discovered model) and resolve the above issue.
Chapter Preview

Introduction

Graphical models such as Bayesian networks (BNs) (Pearl, 1988; Jensen & Nielsen, 2007) and decomposable Markov networks (DMNs) (Xiang, Wong, & Cercone, 1997) have been widely applied to probabilistic reasoning in intelligent systems. Knowledge representation using such models for a simple problem domain is illustrated in Figure 1: Virus can damage computer files, and so can a power glitch. A power glitch also causes a VCR to reset. Links, and the absence of links, convey dependency and independency relations among these variables, and the strength of each link is quantified by a probability distribution. The networks are useful for inferring whether the computer has a virus after checking files and the VCR. This chapter considers how to discover such models from data.

Figure 1.

(a) An example BN (b) A corresponding DMN


Discovery of graphical models (Neapolitan, 2004) by testing all alternatives is intractable. Hence, heuristic search is commonly applied (Cooper & Herskovits, 1992; Spirtes, Glymour, & Scheines, 1993; Lam & Bacchus, 1994; Heckerman, Geiger, & Chickering, 1995; Friedman, Geiger, & Goldszmidt, 1997; Xiang, Wong, & Cercone, 1997). All heuristics make simplifying assumptions about the unknown data-generating models. These assumptions exclude certain models in order to gain efficiency. Often the assumptions, and the models they exclude, are not explicitly stated. Users of such heuristics may suffer from this exclusion without even knowing it. This chapter examines assumptions underlying common heuristics and their consequences for graphical model discovery. A decision theoretic strategy for choosing heuristics is introduced that can take into account a full range of consequences (including efficiency in discovery, efficiency in inference using the discovered model, and cost of inference with an incorrectly discovered model) and resolve the above issue.


Background

A graphical model encodes probabilistic knowledge about a problem domain concisely (Pearl, 1988; Jensen & Nielsen, 2007). Figure 1 illustrates a BN in (a) and a DMN in (b). Each node corresponds to a binary variable. The graph encodes dependence assumptions among these variables, e.g., that f is directly dependent on v and p, but is independent of r once the value of p is observed. Each node in the BN is assigned a conditional probability distribution (CPD) conditioned on its parent nodes, e.g., P(f | v, p) to quantify the uncertain dependency. The joint probability distribution (JPD) for the BN is uniquely defined by the product P(v, p, f, r) = P(f | v, p) P(r | p) P(v) P(p). The DMN has two groups of nodes that are maximally pairwise connected, called cliques. Each is assigned a probability distribution, e.g., {v, p, f} is assigned P(v, p, f). The JPD for the DMN is P(v, p, f) P(r, p) / P(p).
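The two factorizations above can be checked numerically. The sketch below uses illustrative CPD values (not from the chapter) for the Figure 1 variables, builds the BN's JPD from the product P(v, p, f, r) = P(f | v, p) P(r | p) P(v) P(p), and then verifies that the DMN form P(v, p, f) P(r, p) / P(p) reproduces the same distribution when its clique potentials are marginals of that JPD:

```python
from itertools import product

# Hypothetical CPDs for the BN in Figure 1; all numbers are illustrative.
# Variables are binary: 0 = false, 1 = true.
P_v = {1: 0.1, 0: 0.9}                        # P(virus)
P_p = {1: 0.2, 0: 0.8}                        # P(power glitch)
P_r_given_p = {1: {1: 0.9, 0: 0.1},           # P(r | p): reset likely after a glitch
               0: {1: 0.05, 0: 0.95}}
P_f_given_vp = {(1, 1): {1: 0.99, 0: 0.01},   # P(f | v, p): file damage
                (1, 0): {1: 0.80, 0: 0.20},
                (0, 1): {1: 0.60, 0: 0.40},
                (0, 0): {1: 0.02, 0: 0.98}}

# JPD via the BN factorization P(v, p, f, r) = P(f|v,p) P(r|p) P(v) P(p)
jpd = {}
for v, p, f, r in product((0, 1), repeat=4):
    jpd[(v, p, f, r)] = P_f_given_vp[(v, p)][f] * P_r_given_p[p][r] * P_v[v] * P_p[p]
assert abs(sum(jpd.values()) - 1.0) < 1e-12   # a valid JPD sums to 1

# Clique marginals for the DMN: P(v, p, f), P(r, p), and P(p)
def marginal(idx):
    """Sum the JPD over all variables except those at positions idx."""
    out = {}
    for cfg, pr in jpd.items():
        key = tuple(cfg[i] for i in idx)
        out[key] = out.get(key, 0.0) + pr
    return out

P_vpf, P_rp, P_p_marg = marginal((0, 1, 2)), marginal((3, 1)), marginal((1,))

# DMN factorization P(v, p, f) P(r, p) / P(p) recovers the same JPD
for (v, p, f, r), pr in jpd.items():
    assert abs(P_vpf[(v, p, f)] * P_rp[(r, p)] / P_p_marg[(p,)] - pr) < 1e-12
```

The equality holds because the BN encodes that r is independent of {v, f} given p, which is exactly the separation the DMN's shared node p expresses.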

When discovering such models from data, it is important that the dependence and independence relations expressed by the graph approximate the true relations of the unknown data-generating model. How accurately a heuristic can do so depends on its underlying assumptions.

To analyze the assumptions underlying common heuristics, we introduce key concepts for describing dependence relations among domain variables in this section. Let V be a set of discrete variables {x1, …, xn}. Each xi has a finite space Sxi = {xi,j | 1 ≤ j ≤ Di}. When there is no confusion, we write xi,j as xij. The space of a set X ⊆ V of variables is the Cartesian product SX = ∏xi∈X Sxi. Each element of SX is a configuration of X, denoted by x = (x1, …, xn). A probability distribution P(X) specifies the probability P(x) = P(x1, …, xn) for each x ∈ SX. P(V) is the JPD and P(X) (X ⊂ V) is a marginal distribution. A probabilistic domain model (PDM) over V defines P(X) for every X ⊆ V.
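These definitions can be made concrete with a small sketch (the variable names and spaces below are ours, chosen for illustration): the configuration space SX is the Cartesian product of the individual variable spaces, and a marginal P(X) is obtained by summing the JPD over the variables outside X.

```python
from itertools import product

# Finite spaces S_{x_i} for two discrete variables (illustrative)
spaces = {"x1": [0, 1], "x2": ["a", "b", "c"]}

def configurations(X):
    """S_X: the Cartesian product of the spaces of the variables in X."""
    return list(product(*(spaces[x] for x in X)))

S_X = configurations(["x1", "x2"])
assert len(S_X) == 2 * 3          # |S_X| is the product of the |S_{x_i}|

# A JPD P(V) assigns a probability to every configuration of V;
# here a uniform distribution keeps the example simple.
P_V = {cfg: 1.0 / len(S_X) for cfg in S_X}

# Marginal P(x1): sum P(V) over the variables outside {x1} (here, x2)
P_x1 = {}
for (a, b), pr in P_V.items():
    P_x1[a] = P_x1.get(a, 0.0) + pr
assert abs(sum(P_x1.values()) - 1.0) < 1e-12
```

A PDM over V, in these terms, is any object that can answer P(X) for every subset X ⊆ V, whether stored explicitly as above or factored as in a BN or DMN.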
