Using Dempster-Shafer Theory in Data Mining

Malcolm J. Beynon
Copyright: © 2009 | Pages: 8
DOI: 10.4018/978-1-60566-010-3.ch307

Abstract

The origins of Dempster-Shafer theory (DST) go back to the work of Dempster (1967), who developed a system of upper and lower probabilities. Following this, his student Shafer (1976), in his book “A Mathematical Theory of Evidence”, developed Dempster’s work, including a more thorough explanation of belief functions, a more general term for DST. In summary, it is a methodology for evidential reasoning, manipulating uncertainty and capable of representing partial knowledge (Haenni & Lehmann, 2002; Kulasekere, Premaratne, Dewasurendra, Shyu, & Bauer, 2004; Scotney & McClean, 2003). The perception of DST as a generalisation of Bayesian theory (Shafer & Pearl, 1990) identifies its subjective view: simply, the probability of an event indicates the degree to which someone believes it. This is in contrast to the alternative frequentist view, understood through the “principle of insufficient reason”, whereby in a situation of ignorance a Bayesian approach is forced to evenly allocate subjective (additive) probabilities over the frame of discernment. See Cobb and Shenoy (2003) for a contemporary comparison between Bayesian and belief function reasoning. The development of DST includes analogies to rough set theory (Wu, Leung, & Zhang, 2002) and its operation within neural and fuzzy environments (Binaghi, Gallo, & Madella, 2000; Yang, Chen, & Wu, 2003). Techniques based around belief decision trees (Elouedi, Mellouli, & Smets, 2001), multi-criteria decision making (Beynon, 2002) and non-parametric regression (Petit-Renaud & Denoeux, 2004) utilise DST to allow analysis in the presence of uncertainty and imprecision. This is demonstrated in this article with the ‘Classification and Ranking belief Simplex’ (CaRBS) technique for object classification; see Beynon (2005a).

Background

The terminology inherent in DST starts with a finite set of hypotheses Θ (the frame of discernment). A basic probability assignment (bpa) or mass value is a function m: 2Θ → [0, 1] such that m(∅) = 0 (∅ - the empty set) and ∑A∈2Θ m(A) = 1 (2Θ - the power set of Θ). If the assignment m(∅) = 0 is not imposed, then the transferable belief model can be adopted (Elouedi, Mellouli, & Smets, 2001; Petit-Renaud & Denoeux, 2004). Any A ∈ 2Θ for which m(A) is non-zero is called a focal element and represents the exact belief in the proposition depicted by A. From a single piece of evidence, a set of focal elements and their mass values define a body of evidence (BOE).
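To make these definitions concrete, a minimal Python sketch follows (illustrative only; the small frame, the example mass values and the helper name is_valid_bpa are assumptions made here, not part of the chapter's CaRBS exposition). It stores a BOE as a mapping from subsets of Θ to mass values and checks the two bpa conditions, m(∅) = 0 and that the masses sum to 1.

# A hypothetical BOE over a small frame of discernment Theta = {a, b, c};
# the subsets carrying non-zero mass are the focal elements.
THETA = frozenset({"a", "b", "c"})

boe = {
    frozenset({"a"}): 0.4,        # exact belief in {a}
    frozenset({"a", "b"}): 0.3,   # exact belief in {a, b}
    THETA: 0.3,                   # mass assigned to Theta itself (ignorance)
}

def is_valid_bpa(mass, theta):
    # bpa conditions: m(empty set) = 0, every focal element lies in 2^Theta,
    # and the mass values sum to 1 (within floating-point tolerance).
    if mass.get(frozenset(), 0.0) != 0.0:
        return False
    if any(not subset <= theta for subset in mass):
        return False
    return abs(sum(mass.values()) - 1.0) < 1e-9

focal_elements = [subset for subset, m in boe.items() if m > 0]
print(is_valid_bpa(boe, THETA))   # True
print(focal_elements)             # the three focal elements of this BOE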

Based on a BOE, a belief measure is a function Bel: 2Θ → [0, 1], defined by Bel(A) = ∑B⊆A m(B), for all A ⊆ Θ. It represents the confidence that a specific proposition lies in A or any subset of A. The plausibility measure is a function Pls: 2Θ → [0, 1], defined by Pls(A) = ∑B∩A≠∅ m(B), for all A ⊆ Θ. Clearly, Pls(A) represents the extent to which we fail to disbelieve A. These measures are directly related to one another through Bel(A) = 1 – Pls(¬A) and Pls(A) = 1 – Bel(¬A), where ¬A refers to the complement of A, ‘not A’.
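Continuing the sketch above (again an illustration under the same assumptions, not the chapter's own code), belief and plausibility can be computed directly from the focal elements of the BOE, and the relation Pls(A) = 1 – Bel(¬A) checked numerically.

def bel(mass, a):
    # Bel(A): sum the masses of focal elements B that are subsets of A.
    return sum(m for b, m in mass.items() if b <= a)

def pls(mass, a):
    # Pls(A): sum the masses of focal elements B that intersect A.
    return sum(m for b, m in mass.items() if b & a)

A = frozenset({"a", "b"})
not_A = THETA - A                       # the complement 'not A' within Theta
print(bel(boe, A))                      # 0.7 (to floating point): m({a}) + m({a, b})
print(pls(boe, A))                      # 1.0: every focal element intersects A
print(abs(pls(boe, A) - (1 - bel(boe, not_A))) < 1e-9)   # True: Pls(A) = 1 - Bel(not A)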
