Philippe Smets (Université Libre de Bruxelle, Belgium)

Copyright: © 2009
Pages: 5

DOI: 10.4018/978-1-60566-010-3.ch303

Chapter Preview

This note is a very short presentation of the transferable belief model (TBM), a model for the representation of quantified beliefs based on belief functions. Details can be found in the recent literature.

The TBM covers the same domain as subjective probabilities, except that probability functions are replaced by belief functions, which are much more general. The model is more flexible than the Bayesian one and allows the representation of states of belief that are not adequately represented by probability functions. The theory of belief functions is often called Dempster-Shafer theory, but this term is unfortunately confusing.

Dempster-Shafer theory covers several models that use belief functions. Their aim is usually to model someone's degrees of belief, where a degree of belief is understood as a strength of opinion. They do not cover the problems of vagueness and ambiguity, for which fuzzy set theory and possibility theory are more appropriate.

Beliefs result from uncertainty. Uncertainty can result from a random process (the objective probability case), or from a lack of information (the subjective case). These two forms of uncertainty are usually quantified by probability functions.

Dempster-Shafer theory is an ambiguous term, as it covers several models. One of them, the "transferable belief model", is a model for the representation of quantified beliefs developed independently of any underlying probability model. Based on Shafer's initial work (Shafer, 1976), it has since been extended considerably (Smets, 1998; Smets & Kennes, 1994; Smets & Kruse, 1997).

Suppose a finite set of worlds Ω called the frame of discernment. The term “world” covers concepts like state of affairs, state of nature, situation, context, value of a variable... One world corresponds to the actual world. An agent, denoted You (but it might be a sensor, a robot, a piece of software), does not know which world corresponds to the actual world because the available data are imperfect. Nevertheless, You have some idea, some opinion, about which world might be the actual one. So for every subset A of Ω, You can express Your beliefs, i.e., the strength of Your opinion that the actual world belongs to A. This strength is denoted bel(A). The larger bel(A), the stronger You believe that the actual world belongs to A.
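In the belief-function literature, bel is induced by a basic belief assignment (a mass function) m that allocates non-negative masses, summing to 1, to subsets of Ω; bel(A) is then the total mass of the non-empty subsets of A. A minimal Python sketch of this construction — the frame {a, b, c} and the particular masses are illustrative assumptions, not taken from the text:

```python
# Frame of discernment Omega = {"a", "b", "c"}; subsets are frozensets.
# Illustrative basic belief assignment m (assumed for the example):
# non-negative masses that sum to 1.
m = {
    frozenset({"a"}): 0.5,
    frozenset({"a", "b"}): 0.25,
    frozenset({"a", "b", "c"}): 0.25,  # mass left on Omega itself: the unassigned part
}

def bel(A, m):
    """Belief in A: total mass of the non-empty subsets B of A."""
    return sum(v for B, v in m.items() if B and B <= A)

print(bel(frozenset({"a"}), m))            # 0.5
print(bel(frozenset({"a", "b"}), m))       # 0.75
print(bel(frozenset({"a", "b", "c"}), m))  # 1.0
```

Note that bel is monotone with respect to set inclusion, matching the text: the larger the set A, the larger bel(A).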

Beliefs are not intrinsically observable properties; their impact can be observed only once a decision must be made.

In the TBM, we have described a two-level mental model in order to distinguish between two aspects of beliefs: belief as weighted opinions, and belief for decision making (Smets, 2002a). The two levels are the credal level, where beliefs are held, and the pignistic level, where beliefs are used to make decisions ("credal" and "pignistic" derive from the Latin words "credo", I believe, and "pignus", a wager, a bet).

Usually these two levels are not distinguished, and probability functions are used to quantify beliefs at both levels. Once the two levels are distinguished, as done in the TBM, the classical arguments used to justify the use of probability functions no longer apply at the credal level, where beliefs will be represented by belief functions. At the pignistic level, the probability functions needed to compute expected utilities are called pignistic probabilities to emphasize that they do not represent beliefs, but are merely induced by them.
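The standard pignistic transformation builds such a probability function from a mass function by splitting each mass m(A) equally over the worlds of A (assuming no mass on the empty set): BetP(w) = Σ_{A ∋ w} m(A) / |A|. A sketch with an illustrative, assumed mass function:

```python
# Illustrative basic belief assignment on Omega = {"a", "b", "c"}
# (assumed for the example; no mass on the empty set).
m = {
    frozenset({"a"}): 0.5,
    frozenset({"a", "b"}): 0.25,
    frozenset({"a", "b", "c"}): 0.25,
}

def betp(w, m):
    """Pignistic probability of world w: each focal set A containing w
    contributes m(A) / |A|, i.e. its mass split equally over its worlds."""
    return sum(v / len(A) for A, v in m.items() if w in A)

for w in ("a", "b", "c"):
    print(w, betp(w, m))
```

BetP sums to 1 over the frame, so it is an ordinary probability function and can be plugged into expected-utility calculations at the pignistic level, while the belief function itself stays at the credal level.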

The TBM is a model developed to represent quantified beliefs. It departs from the Bayesian approach in that we do not assume that bel satisfies the additivity encountered in probability theory. Instead we get inequalities like: bel(A∪B) ≥ bel(A) + bel(B) − bel(A∩B).
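This departure from additivity can be checked numerically. A small sketch using the same illustrative (assumed) mass function as above; the inequality is strict here precisely because mass sits on a non-singleton set:

```python
# Illustrative mass function on Omega = {"a", "b", "c"} (assumed for the example).
m = {
    frozenset({"a"}): 0.5,
    frozenset({"a", "b"}): 0.25,
    frozenset({"a", "b", "c"}): 0.25,
}

def bel(A, m):
    """Belief in A: total mass of the non-empty subsets B of A."""
    return sum(v for B, v in m.items() if B and B <= A)

A, B = frozenset({"a"}), frozenset({"b"})
lhs = bel(A | B, m)                          # bel({a, b}) = 0.75
rhs = bel(A, m) + bel(B, m) - bel(A & B, m)  # 0.5 + 0 - 0
assert lhs >= rhs   # the superadditivity inequality holds
assert lhs > rhs    # and is strict: bel is not additive here
```

The mass 0.25 on {a, b} supports the union without committing to either world alone, which is exactly the kind of belief state a single probability function cannot express.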
