Object Classification Using CaRBS


Malcolm J. Beynon
DOI: 10.4018/978-1-60566-026-4.ch455

Abstract

The notion of uncertain reasoning has grown relative to the power and intelligence of computers. When the available sources offer uncertain information and/or imprecise data, what matters is the ability to represent uncertainty and reason about it (Shafer & Pearl, 1990). A very general problem of uncertain reasoning is how to combine information from independent and partially reliable sources (Haenni & Hartmann, forthcoming). In data mining, the understanding of confirming and/or conflicting information from the characteristics describing objects classified to given hypotheses is affected by the reliability of those characteristics. Further, the presence of missing values compounds the problem, since the reasons for their presence may be external to the incumbent reliability issues (Olinsky, Chen, & Harlow, 2003; West, 2001). These issues are demonstrated here using the Classification and Ranking Belief Simplex (CaRBS) technique, introduced in Beynon and Buchanan (2004) and Beynon (2005). CaRBS operates within the domain of uncertain reasoning, namely in its accommodation of ignorance, due to its mathematical structure being based on the Dempster-Shafer theory of evidence (DST) (Srivastava & Mock, 2002). The ignorance here encapsulates the incompleteness of the data set (the presence of missing values), as well as uncertainty in the evidential support that characteristics give to the final classification of the objects. This chapter demonstrates that a technique such as CaRBS, through uncertain reasoning, is able to manage the presence of missing values uniquely, by considering them a manifestation of ignorance, while also allowing for the possible unreliability of characteristics. Importantly, the described process removes the need to falsely transform the data set in any way, such as through imputation (Huisman, 2000). The example issue of credit ratings considered here has become increasingly influential since its introduction around 1900 with the Manual of Industrial and Miscellaneous Securities (Levich, Majnoni, & Reinhart, 2002). The rating agencies shroud their operations in particular secrecy, stating that statistical models cannot be used to replicate their ratings (Singleton & Surkan, 1991), hence advocating the need for alternative analyses, including those utilising uncertain reasoning.

Background

DST is a methodology for evidential reasoning that manipulates uncertainty and is capable of representing partial knowledge (Kulasekere, Premaratne, Dewasurendra, Shyu, & Bauer, 2004; Scotney & McClean, 2003). Soon after its introduction, it came to be considered a generalisation of Bayesian theory.

The traditional terminology within DST begins with a finite set of hypotheses Θ (the frame of discernment). A mass value (basic probability assignment) is a function m: 2^Θ → [0, 1] such that m(∅) = 0 and Σ_{A ∈ 2^Θ} m(A) = 1, where 2^Θ is the power set of Θ. Any A ∈ 2^Θ for which m(A) > 0 is called a focal element and represents the exact belief in the proposition depicted by A. From one source of evidence, a set of focal elements (and their mass values) is defined as a body of evidence (BOE).
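For illustration, a BOE over a two-hypothesis frame Θ = {x, not x} can be sketched in Python as a mapping from focal elements to mass values; the dictionary-of-frozensets representation, the example mass values, and the validity check below are assumptions of this sketch rather than part of the CaRBS formulation.

# A minimal sketch of a body of evidence (BOE) over the frame
# Theta = {"x", "not_x"}, using frozensets of hypotheses as focal
# elements mapped to their mass values (illustrative values only).

def is_valid_boe(boe, frame):
    """Check the DST constraints: no mass on the empty set, every
    focal element is a subset of the frame, and the mass values
    sum to one (within floating-point tolerance)."""
    if any(len(focal) == 0 for focal in boe):
        return False
    if any(not focal <= frame for focal in boe):
        return False
    return abs(sum(boe.values()) - 1.0) < 1e-9

frame = frozenset({"x", "not_x"})

# Evidence from one characteristic: some support for {x}, some for
# {not_x}, and the remainder assigned to the whole frame (ignorance).
boe = {
    frozenset({"x"}): 0.6,
    frozenset({"not_x"}): 0.1,
    frame: 0.3,
}

print(is_valid_boe(boe, frame))  # True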

To collate two or more sources of evidence, DST provides a method to combine them, using Dempster’s rule of combination. If m1(⋅) and m2(⋅) are independent BOEs, then their combination m1 ⊕ m2: 2^Θ → [0, 1] is defined by:

[m1 ⊕ m2](y) = (Σ_{A∩B=y} m1(A) m2(B)) / (1 − k) for y ≠ ∅, with [m1 ⊕ m2](∅) = 0,

where k = Σ_{A∩B=∅} m1(A) m2(B), giving a mass value associated with each y ⊆ Θ. The term k is the sum of the products of mass values from the two pieces of evidence whose focal elements have empty intersection, and can be interpreted as a measure of conflict between the sources (Murphy, 2000); the factor (1 − k) serves to normalise the combined mass values. The associated problem with conflict is that the larger the value of k, the more conflict there is in the sources of evidence, and subsequently the less sense there is in their combination (Murphy, 2000).
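As a minimal sketch of Dempster’s rule, assuming the same dictionary-of-frozensets representation as above (the example BOEs and their mass values are illustrative assumptions, not data from the chapter), the combination and the conflict level k can be computed as follows.

from itertools import product

def combine(m1, m2):
    """Sketch of Dempster's rule of combination for two independent
    BOEs given as dicts {frozenset: mass}. Returns the combined BOE
    and the conflict level k."""
    k = 0.0
    combined = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if not inter:
            k += ma * mb          # products with empty intersection
        else:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
    if k >= 1.0:
        raise ValueError("Total conflict: the BOEs cannot be combined")
    # Normalise by (1 - k) so the combined mass values sum to one.
    return {focal: mass / (1.0 - k) for focal, mass in combined.items()}, k

frame = frozenset({"x", "not_x"})
m1 = {frozenset({"x"}): 0.6, frozenset({"not_x"}): 0.1, frame: 0.3}
m2 = {frozenset({"x"}): 0.4, frozenset({"not_x"}): 0.2, frame: 0.4}

m12, k = combine(m1, m2)
print(k)    # conflict between the two sources (0.16 here)
print(m12)  # combined mass values over {x}, {not_x}, and the frame

With these illustrative inputs, the combined masses sum to one after normalisation, and k quantifies how much of the original product mass fell on empty intersections.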

Key Terms in this Chapter

Evolutionary Algorithm: An algorithm that incorporates aspects of natural selection or survival of the fittest.

Uncertain Reasoning: The attempt to represent uncertainty and reason about it when using uncertain knowledge, imprecise information, and so forth.

Focal Element: A nonempty subset of the frame of discernment that is assigned a positive mass value (exact belief).

Confidence Value: A function that transforms a value into a standard domain, such as between 0 and 1.

Mass Values: Values expressing the level of exact belief in the associated propositions (focal elements).

Objective Function: A function of the difference between predictions and known values, chosen so that a model is configured by optimising the function or criterion.

Imputation: Replacement of a missing value by a surrogate.

Simplex Plot: Equilateral triangle domain representation of triplets of nonnegative values which sum to one (a coordinate sketch is given after these terms).

Equivalence Class: A set of objects considered the same subject to an equivalence relation (e.g., those objects classified to x).
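As referenced under Simplex Plot above, the following sketch maps a triplet of nonnegative values summing to one onto a point within an equilateral triangle using barycentric coordinates; the vertex placement and the assignment of mass values to vertices are assumptions of this sketch.

import math

# Vertices of an equilateral triangle with unit side length; which
# vertex corresponds to {x}, {not_x}, or Theta is an assumption here.
VERTICES = ((0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2))

def simplex_coordinates(triplet):
    """Barycentric mapping: each value in the triplet weights its vertex."""
    assert abs(sum(triplet) - 1.0) < 1e-9
    x = sum(w * vx for w, (vx, _) in zip(triplet, VERTICES))
    y = sum(w * vy for w, (_, vy) in zip(triplet, VERTICES))
    return (x, y)

# Example: strong support for {x}, little for {not_x}, some ignorance.
print(simplex_coordinates((0.6, 0.1, 0.3)))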
