Decision Tree Induction

Roberta Siciliano (University of Naples, Federico II, Italy) and Claudio Conversano (University of Cagliari, Italy)
Copyright: © 2009 |Pages: 7
DOI: 10.4018/978-1-60566-010-3.ch098

Abstract

Decision Tree Induction (DTI) is a tool to induce a classification or regression model from (usually large) datasets characterized by n objects (records), each one containing a set x of numerical or nominal attributes, and a special feature y designated as its outcome. Statisticians use the term “predictors” for the attributes and “response variable” for the outcome. DTI builds a model that summarizes the underlying relationships between x and y. Two kinds of model can be estimated using decision trees: classification trees if y is nominal, and regression trees if y is numerical. Hereinafter we refer to classification trees to show the main features of DTI. For a detailed insight into the characteristics of regression trees see Hastie et al. (2001). As an example of a classification tree, let us consider a sample of patients with prostate cancer on which data such as those summarized in Figure 1 have been collected. Suppose a new patient is observed and we want to determine whether the tumor has penetrated the prostatic capsule on the basis of the other available information. Posing a series of questions about the characteristics of the patient can help to predict the tumor’s penetration. DTI proceeds in such a way, inducing a series of follow-up (usually binary) questions about the attributes of an unknown instance until a conclusion about its most likely class label is reached. Questions and their alternative answers can be represented hierarchically in the form of a decision tree, such as the one depicted in Figure 2.
Chapter Preview

Introduction

Decision Tree Induction (DTI) is a tool to induce a classification or regression model from (usually large) datasets characterized by n objects (records), each one containing a set x of numerical or nominal attributes, and a special feature y designated as its outcome. Statisticians use the term “predictors” for the attributes and “response variable” for the outcome. DTI builds a model that summarizes the underlying relationships between x and y. Two kinds of model can be estimated using decision trees: classification trees if y is nominal, and regression trees if y is numerical. Hereinafter we refer to classification trees to show the main features of DTI. For a detailed insight into the characteristics of regression trees see Hastie et al. (2001).
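The distinction between the two kinds of tree shows up at the terminal nodes: a classification tree predicts the modal class of the training instances that reach a leaf, while a regression tree predicts their mean response. A minimal Python sketch, using illustrative leaf contents rather than data from the chapter:

```python
from statistics import mode, mean

# Instances that reached a terminal node (illustrative values only).
leaf_classes = ["no", "no", "yes", "no"]  # classification tree leaf
leaf_values = [2.0, 4.0, 6.0]             # regression tree leaf

# A classification tree labels the leaf with its modal class;
# a regression tree predicts the mean response of the leaf.
class_prediction = mode(leaf_classes)
regression_prediction = mean(leaf_values)

print(class_prediction)       # "no"
print(regression_prediction)  # 4.0
```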

As an example of a classification tree, let us consider a sample of patients with prostate cancer on which data such as those summarized in Figure 1 have been collected. Suppose a new patient is observed and we want to determine whether the tumor has penetrated the prostatic capsule on the basis of the other available information. Posing a series of questions about the characteristics of the patient can help to predict the tumor’s penetration. DTI proceeds in such a way, inducing a series of follow-up (usually binary) questions about the attributes of an unknown instance until a conclusion about its most likely class label is reached. Questions and their alternative answers can be represented hierarchically in the form of a decision tree, such as the one depicted in Figure 2.

Figure 1.

The prostate cancer dataset

Figure 2.

An illustrative example of a decision tree for the prostate cancer classification

The decision tree contains a root node and some internal and terminal nodes. The root node and the internal ones are used to partition instances of the dataset into smaller subsets of relatively homogeneous classes. To classify a previously unlabelled instance, we start from the test condition in the root node and follow the appropriate branch based on the outcome of the test. When an internal node is reached a new test condition is applied, and so on down to a terminal node. On reaching a terminal node, the modal class of the instances in that node becomes the instance’s class label. Going back to the prostate cancer classification problem, a new subject presenting a prostatic specific antigen value lower than 4.75 and a unilobar nodule on the left side will be classified as “no penetration”. It is evident that decision trees can easily be converted into IF-THEN rules and used for decision-making purposes.
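The traversal just described is what makes a tree convertible into IF-THEN rules. A minimal Python sketch of the prostate cancer rule stated above; only the PSA < 4.75 / unilobar-left branch is taken from the text, and the remaining branches are hypothetical placeholders:

```python
def classify(patient):
    """Walk the decision tree as nested IF-THEN rules.

    Only the PSA < 4.75 and unilobar-left-nodule branch comes from the
    chapter's example; the other outcomes are hypothetical placeholders.
    """
    if patient["psa"] < 4.75:
        if patient["nodule"] == "unilobar_left":
            return "no penetration"
        return "penetration"  # hypothetical branch
    return "penetration"      # hypothetical branch

# A new subject with PSA below 4.75 and a unilobar nodule on the left
# side is classified as "no penetration", as in the chapter's example.
print(classify({"psa": 3.1, "nodule": "unilobar_left"}))
```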


Background

DTI is useful for data mining applications because it can represent functions of both numerical and nominal attributes, and because of its feasibility, predictive ability and interpretability. It can effectively handle missing values and noisy data, and it can be used either as an explanatory tool for distinguishing objects of different classes or as a prediction tool that assigns class labels to previously unseen objects.

Some of the well-known DTI algorithms include ID3 (Quinlan, 1983), CART (Breiman et al., 1984), C4.5 (Quinlan, 1993), FAST (Mola and Siciliano, 1997), SLIQ (Mehta et al., 1997) and GUIDE (Loh, 2002). All these algorithms use a greedy, top-down recursive partitioning approach. They differ primarily in the splitting criteria, the type of splits (2-way or multi-way) and the handling of the overfitting problem.
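As an illustration of the greedy splitting step shared by these algorithms, the sketch below searches for the best 2-way split on a single numerical attribute using the Gini impurity criterion used by CART. It is a simplified sketch, not the implementation of any of the cited algorithms; a full inducer would repeat this search over all attributes at every node and recurse until a stopping rule is met.

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def best_split(values, labels):
    """Exhaustively try 2-way splits (x < t vs. x >= t) on one numeric
    attribute and return the threshold minimizing the weighted impurity
    of the two children, together with that impurity."""
    best_threshold, best_score = None, float("inf")
    for t in sorted(set(values))[1:]:  # skip the minimum: both sides non-empty
        left = [y for x, y in zip(values, labels) if x < t]
        right = [y for x, y in zip(values, labels) if x >= t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best_score:
            best_threshold, best_score = t, score
    return best_threshold, best_score

threshold, impurity = best_split([1.0, 2.0, 8.0, 9.0], ["a", "a", "b", "b"])
print(threshold, impurity)
```

Here the threshold 8.0 is selected because it separates the two classes perfectly, giving a weighted child impurity of 0.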
