The first (crisp) decision tree techniques were introduced in the 1960s (Hunt, Marin, & Stone, 1966); their appeal to decision makers is due in no small part to their comprehensibility in classifying objects based on their attribute values (Janikow, 1998). With early techniques such as the ID3 algorithm (Quinlan, 1979), the general approach involves repeatedly partitioning the objects in a data set by selecting attributes down a tree structure from the root node, until each subset of objects is associated with the same decision class or no attribute is available for further decomposition, ending in a number of leaf nodes. This article considers the notion of decision trees in a fuzzy environment (Zadeh, 1965). The first fuzzy decision tree (FDT) reference is attributed to Chang and Pavlidis (1977), who defined a binary tree using a branch-bound-backtrack algorithm but offered limited instruction on FDT construction. Later developments included fuzzy versions of crisp decision tree techniques, such as fuzzy ID3 (see Ichihashi, Shirai, Nagasaka, & Miyoshi, 1996; Pal & Chakraborty, 2001), as well as other approaches (Olaru & Wehenkel, 2003).
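The repeated partitioning described above can be sketched in a few lines of Python. This is a minimal illustration of the crisp ID3-style procedure (attribute selection by information gain, recursion until purity or attribute exhaustion), not the fuzzy variants discussed later; the function and variable names are the author's own choices for illustration.

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def build_tree(rows, labels, attributes):
    """Recursively partition rows on the attribute with the highest
    information gain, stopping when a subset is pure (one decision
    class) or no attributes remain (majority-class leaf)."""
    if len(set(labels)) == 1:
        return labels[0]                             # pure subset -> leaf node
    if not attributes:
        return Counter(labels).most_common(1)[0][0]  # majority-class leaf

    def gain(attr):
        # Information gain = entropy before split - weighted entropy after.
        subsets = {}
        for row, lab in zip(rows, labels):
            subsets.setdefault(row[attr], []).append(lab)
        remainder = sum(len(s) / len(labels) * entropy(s)
                        for s in subsets.values())
        return entropy(labels) - remainder

    best = max(attributes, key=gain)
    node = {}
    for value in set(row[best] for row in rows):
        sub = [(r, l) for r, l in zip(rows, labels) if r[best] == value]
        sub_rows, sub_labels = zip(*sub)
        node[value] = build_tree(list(sub_rows), list(sub_labels),
                                 [a for a in attributes if a != best])
    return (best, node)
```

On a toy data set where attribute `"a"` alone determines the class, the tree splits once on `"a"` and terminates in two leaf nodes:

```python
rows = [{"a": 0, "b": 0}, {"a": 0, "b": 1}, {"a": 1, "b": 0}, {"a": 1, "b": 1}]
tree = build_tree(rows, ["no", "no", "yes", "yes"], ["a", "b"])
# tree == ("a", {0: "no", 1: "yes"})
```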
Key Terms in this Chapter
Membership Function: Mathematical function to grade the association of a value to a set.
Inductive Learning: The process of inferring generalizations from the information in sample data.
Branch: Path down a decision tree from the root node to a leaf node.
Subsethood: The degree to which the set A is a subset of the set B.
Root Node: First (top) node in a decision tree, from which all branches of the tree start.
Decision Tree: A tree-like way of representing a collection of hierarchical decision rules that lead to a class or value, starting from a root node ending in a series of leaf nodes.
Leaf Node: Node at the end of a branch that identifies the decision class to which the associated branch classifies objects.
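Two of the terms above, membership function and subsethood, can be illustrated concretely. The sketch below uses a triangular membership function and Kosko's min-based subsethood measure over discrete membership values; these are common illustrative choices, not the only definitions in the fuzzy set literature.

```python
def tri_membership(x, a, b, c):
    """Triangular membership function: the membership grade rises
    linearly from 0 at a to a peak of 1 at b, then falls linearly
    back to 0 at c; it is 0 outside the interval (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def subsethood(mu_a, mu_b):
    """Degree to which fuzzy set A is a subset of fuzzy set B
    (Kosko's measure): the sum of min(A, B) membership grades
    divided by the sum of A's membership grades."""
    numerator = sum(min(a, b) for a, b in zip(mu_a, mu_b))
    denominator = sum(mu_a)
    return numerator / denominator if denominator else 1.0
```

For example, `tri_membership(5, 0, 5, 10)` grades the value 5 with full membership 1.0, while a set A whose grades never exceed B's yields a subsethood degree of 1.0 (A is entirely contained in B).

```python
mu_a = [0.2, 0.8, 1.0]
mu_b = [0.5, 0.9, 1.0]
subsethood(mu_a, mu_b)  # 1.0: A is fully a subset of B
```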