Evidence-Based Uncertainty Modeling

Tazid Ali
DOI: 10.4018/978-1-4666-4991-0.ch003

Abstract

Evidence is the essence of any decision-making process. However, in any situation the evidence we come across is usually incomplete. Absence of complete evidence results in uncertainty, and uncertainty leads to belief. The framework of Dempster-Shafer theory, which is based on the notion of belief, is overviewed in this chapter. Methods of combining evidence from different sources are surveyed. The relationship of probability theory and possibility theory to evidence theory is exhibited. The extension of the classical Dempster-Shafer structure to the fuzzy setting is discussed. Finally, uncertainty measurement in the framework of the Dempster-Shafer structure is dealt with.

1. Introduction

Not to be absolutely certain is, I think, one of the essential things in rationality. -Bertrand Russell

We face uncertainty in our day-to-day life. Our understanding of the past and our anticipation of the future are tainted with uncertainty. Certainty is elusive in science; uncertainty is an unavoidable aspect of any scientific investigation. Science, as we know, is modeling (of the Universe), and no model is an exact replication of reality, so a certain amount of uncertainty always persists. This is called model uncertainty. Moreover, the input parameters of a model are usually not exactly or precisely known. Sometimes an input parameter exhibits randomness; at other times we have insufficient knowledge about a parameter. The former kind of uncertainty is called aleatory uncertainty (or variability) and the latter is called epistemic uncertainty. The uncertainty in the input parameters is transferred to the output parameter; this is called uncertainty propagation.

We make decisions based on several inputs (information), and when the inputs are tainted with uncertainty there will certainly be uncertainty in our decisions. Ideally we would prefer to avoid uncertainty or eliminate it completely. However, complete elimination of uncertainty is rarely possible, and ignoring uncertainty may lead to over- or under-conservative decisions. The practical course is therefore to reduce uncertainty as far as possible. For this one should have a clear idea of the nature and type of uncertainty a parameter is tainted with. The next step is to characterize or model the uncertainty. This is a crucial step, because uncertainty that is not properly characterized will influence the output in an unreasonable way: depending on the model under consideration, one or more parameters may be highly sensitive, in the sense that a slight change in their value or representation results in a relatively large change in the output. Since we have to live with uncertainty, it is better to be aware of the amount of uncertainty involved in any of our conclusions, so that realistic and reasonable decisions can be taken. Measuring the uncertainty involved in any process is called uncertainty quantification. Uncertainty quantification helps us to be aware of the risk involved in any of our decisions.
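
To make uncertainty propagation concrete, the following minimal Python sketch (not from the chapter; the two-input model, the Gaussian spread, and the interval bounds are illustrative assumptions) propagates an aleatory input by Monte Carlo sampling and an epistemic input by interval bounding:

    import random

    random.seed(0)

    def model(x1, x2):
        # Hypothetical model: the output depends on both inputs.
        return x1 * x2

    # x1 is aleatory: natural variability, modeled by a distribution.
    # x2 is epistemic: known only to lie in the interval [lo, hi].
    lo, hi = 0.8, 1.2
    n = 10_000

    out_lo, out_hi = [], []
    for _ in range(n):
        x1 = random.gauss(10.0, 1.0)     # sample the aleatory input
        out_lo.append(model(x1, lo))     # envelope: lower epistemic bound
        out_hi.append(model(x1, hi))     # envelope: upper epistemic bound

    print(f"output mean lies in [{sum(out_lo) / n:.2f}, {sum(out_hi) / n:.2f}]")

The aleatory input is averaged out by sampling, while the epistemic input is carried through as an envelope: the output mean is known only up to an interval, not as a single number.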

Probability theory has been an age-old and effective tool for modeling one type of uncertainty, viz., randomness, i.e., processes in which the occurrence of events is determined by chance. Until the middle of the 20th century it was assumed that probability theory could adequately deal with all kinds of uncertainty arising in science and engineering. This assumption gradually faded with the emergence of two important generalizations. One is the generalization of classical measure theory (Halmos, 1950) to the theory of generalized measures (also called regular monotone measures or fuzzy measures in the literature). A generalized measure is obtained by weakening the additivity condition of a classical measure. The other is the generalization of classical set theory to fuzzy set theory (Zadeh, 1965), obtained by weakening the requirement of sharp set boundaries. Fuzzy set theory can handle uncertainty arising from the inherent imprecision of the language in which a problem is stated. As an extension of his theory of fuzzy sets and fuzzy logic, Zadeh developed possibility theory (Zadeh, 1978), which can model uncertainty arising from imprecise information. Each of these theories, viz., probability theory, possibility theory, and fuzzy set theory, can handle one or another form of uncertainty, so none of them alone is suitable for problems involving several forms of uncertainty at once. The theory of evidence comes in handy in such situations, as discussed in this chapter.
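
As a small numerical illustration of what weakening additivity amounts to (the numbers below are assumptions chosen only for illustration), compare a probability measure, which adds over disjoint events, with a possibility measure, which takes a maximum:

    # Three mutually exclusive outcomes (illustrative numbers).
    prob = {"a": 0.5, "b": 0.3, "c": 0.2}   # probability: additive, sums to 1
    poss = {"a": 1.0, "b": 0.7, "c": 0.2}   # possibility: maxitive, max is 1

    # The probability of a union of disjoint events adds ...
    p_ab = prob["a"] + prob["b"]            # P({a, b}) = 0.8
    # ... while the possibility of a union takes the maximum.
    pos_ab = max(poss["a"], poss["b"])      # Pos({a, b}) = 1.0

    print(f"P(a or b) = {p_ab}, Pos(a or b) = {pos_ab}")

The possibility of the union is not the sum of its parts; such maxitive measures are one instance of the generalized (non-additive) measures mentioned above.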


2. Basic Concepts of Dempster-Shafer Theory (DST)

Evidence theory, better known as Dempster-Shafer theory, has its origin in Dempster's work on upper and lower probabilities in the 1960s (Dempster, 1967a, 1967b) and Shafer's work on belief functions in the 1970s (Shafer, 1976). The advantage of this theory over other uncertainty theories is its ability to model randomness and imprecision simultaneously.
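
As a preview of the basic objects defined in this section, the following minimal Python sketch (the frame of fault modes and the mass values are hypothetical) builds a basic probability assignment over subsets of a frame of discernment and computes the belief and plausibility it induces:

    # Frame of discernment: hypothetical fault modes of a device.
    frame = frozenset({"short", "open", "drift"})

    # Basic probability assignment: mass on subsets (focal elements).
    # Mass on a multi-element set expresses imprecision; mass on the
    # whole frame expresses ignorance.
    m = {
        frozenset({"short"}): 0.4,
        frozenset({"short", "open"}): 0.3,  # evidence cannot tell these apart
        frame: 0.3,
    }
    assert abs(sum(m.values()) - 1.0) < 1e-9

    def belief(A):
        # Bel(A): total mass committed to subsets of A.
        return sum(v for B, v in m.items() if B <= A)

    def plausibility(A):
        # Pl(A): total mass of focal elements consistent with A.
        return sum(v for B, v in m.items() if B & A)

    A = frozenset({"short"})
    print(f"Bel = {belief(A):.2f}, Pl = {plausibility(A):.2f}")  # 0.40, 1.00

The gap between belief and plausibility, here the interval [0.40, 1.00], is exactly how DST captures imprecision on top of randomness: the evidence pins the support for an event down only to an interval.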
