Yang Xiang (University of Guelph, Canada)

Copyright: © 2009
|Pages: 7

DOI: 10.4018/978-1-60566-010-3.ch249

Chapter Preview

Graphical models such as Bayesian networks (BNs) (Pearl, 1988; Jensen & Nielsen, 2007) and decomposable Markov networks (DMNs) (Xiang, Wong, & Cercone, 1997) have been widely applied to probabilistic reasoning in intelligent systems. Knowledge representation using such models for a simple problem domain is illustrated in Figure 1: a virus can damage computer files, and so can a power glitch; a power glitch also causes a VCR to reset. Links, and the lack of them, convey dependence and independence relations among these variables, and the strength of each link is quantified by a probability distribution. The networks are useful for inferring whether the computer has a virus after checking the files and the VCR. This chapter considers how to discover such models from data.

Discovery of graphical models (Neapolitan, 2004) by testing all alternatives is intractable. Hence, heuristic search is commonly applied (Cooper & Herskovits, 1992; Spirtes, Glymour, & Scheines, 1993; Lam & Bacchus, 1994; Heckerman, Geiger, & Chickering, 1995; Friedman, Geiger, & Goldszmidt, 1997; Xiang, Wong, & Cercone, 1997). All heuristics make simplifying assumptions about the unknown data-generating model. These assumptions exclude certain models in order to gain efficiency. Often the assumptions, and the models they exclude, are not stated explicitly, so users of such heuristics may suffer from the exclusion without knowing it. This chapter examines the assumptions underlying common heuristics and their consequences for graphical model discovery. A decision-theoretic strategy for choosing heuristics is introduced that can take into account a full range of consequences (including efficiency in discovery, efficiency in inference using the discovered model, and the cost of inference with an incorrectly discovered model) and resolve the above issue.
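To make the idea of heuristic search concrete, the sketch below shows a minimal score-based greedy search over DAG structures for binary variables. It is an illustration only, not any of the cited algorithms: the BIC-style score, the K2-style fixed variable ordering (which sidesteps cycle checking), and all variable names and data are assumptions introduced here.

```python
import itertools
import math
import random

def bic_score(data, parents):
    """BIC-style score of a DAG (dict: node -> tuple of parents)
    over binary data rows given as dicts. Illustrative sketch only."""
    n = len(data)
    score = 0.0
    for x, pa in parents.items():
        for pa_vals in itertools.product([0, 1], repeat=len(pa)):
            rows = [row for row in data
                    if all(row[q] == qv for q, qv in zip(pa, pa_vals))]
            if not rows:
                continue
            n1 = sum(row[x] for row in rows)
            for c in (n1, len(rows) - n1):
                if c:  # log-likelihood term c * log(c / N_config)
                    score += c * math.log(c / len(rows))
        # Penalty: one free parameter per parent configuration.
        score -= 0.5 * math.log(n) * (2 ** len(pa))
    return score

def greedy_search(data, nodes):
    """Hill-climb by single-edge additions. Simplifying assumption:
    edges only go from earlier to later nodes in a fixed ordering
    (K2-style), so no cycle check is needed."""
    parents = {x: () for x in nodes}
    improved = True
    while improved:
        improved = False
        best = bic_score(data, parents)
        for i, x in enumerate(nodes):
            for y in nodes[:i]:
                if y in parents[x]:
                    continue
                trial = dict(parents)
                trial[x] = parents[x] + (y,)
                s = bic_score(data, trial)
                if s > best:
                    best, parents, improved = s, trial, True
    return parents

# Toy data consistent with Figure 1's dependencies: f tracks v, r tracks p.
random.seed(0)
data = []
for _ in range(200):
    v, p = random.randint(0, 1), random.randint(0, 1)
    data.append({'v': v, 'p': p, 'f': v, 'r': p})
learned = greedy_search(data, ['v', 'p', 'f', 'r'])
```

On this toy data the search recovers the strong dependencies (v as a parent of f, p as a parent of r), illustrating how the fixed-ordering assumption buys efficiency at the price of excluding models whose edges run against that ordering.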

A graphical model encodes probabilistic knowledge about a problem domain concisely (Pearl, 1988; Jensen & Nielsen, 2007). Figure 1 illustrates a BN in (a) and a DMN in (b). Each node corresponds to a binary variable. The graph encodes dependence assumptions among these variables, e.g., that *f* is directly dependent on *v* and *p*, but is independent of *r* once the value of *p* is observed. Each node in the BN is assigned a conditional probability distribution (CPD) conditioned on its parent nodes, e.g., *P(f | v, p)*, to quantify the uncertain dependency. The joint probability distribution (JPD) for the BN is uniquely defined by the product *P(v, p, f, r) = P(f | v, p) P(r | p) P(v) P(p)*. The DMN has two maximal sets of pairwise-connected nodes, called *cliques.* Each is assigned a probability distribution, e.g., *{v, p, f}* is assigned *P(v, p, f)*. The JPD for the DMN is *P(v, p, f, r) = P(v, p, f) P(r, p) / P(p)*.
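The two factorizations can be checked numerically. The sketch below fills in hypothetical CPD numbers (all probabilities are illustrative assumptions, not values from the chapter), computes the BN product *P(f | v, p) P(r | p) P(v) P(p)*, and verifies that the DMN form *P(v, p, f) P(r, p) / P(p)* yields the same JPD, which holds because *r* is independent of *{v, f}* given *p*.

```python
import itertools

# Hypothetical CPDs for the binary variables v (virus), p (power glitch),
# f (file damage), r (VCR reset). All numbers are illustrative only.
P_v = {1: 0.1, 0: 0.9}                       # P(v)
P_p = {1: 0.2, 0: 0.8}                       # P(p)
P_f1 = {(1, 1): 0.95, (1, 0): 0.8,
        (0, 1): 0.7, (0, 0): 0.05}           # P(f=1 | v, p)
P_r1 = {1: 0.9, 0: 0.01}                     # P(r=1 | p)

def joint(v, p, f, r):
    """BN factorization: P(v,p,f,r) = P(f|v,p) P(r|p) P(v) P(p)."""
    pf = P_f1[(v, p)] if f == 1 else 1 - P_f1[(v, p)]
    pr = P_r1[p] if r == 1 else 1 - P_r1[p]
    return pf * pr * P_v[v] * P_p[p]

states = list(itertools.product([0, 1], repeat=4))
total = sum(joint(*s) for s in states)       # a JPD must sum to 1

# Clique marginals for the DMN form P(v,p,f) P(r,p) / P(p).
def marg_vpf(v, p, f):
    return sum(joint(v, p, f, r) for r in (0, 1))

def marg_rp(r, p):
    return sum(joint(v, p, f, r) for v in (0, 1) for f in (0, 1))

# Both factorizations define the same JPD.
ok = all(abs(joint(v, p, f, r)
             - marg_vpf(v, p, f) * marg_rp(r, p) / P_p[p]) < 1e-9
         for v, p, f, r in states)
```

Note that the agreement of the two forms relies on the conditional independence encoded by the graph; with a data-generating model where *r* also depended directly on *v*, the clique factorization over *{v, p, f}* and *{r, p}* would only approximate the JPD.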

When discovering such models from data, it is important that the dependence and independence relations expressed by the graph approximate the true relations of the unknown data-generating model. How accurately a heuristic can do so depends on its underlying assumptions.

To analyze the assumptions underlying common heuristics, we introduce key concepts for describing dependence relations among domain variables in this section. Let *V* be a set of discrete variables {*x_{1}*, …,

