Creating and Analyzing Induced Decision Trees From Online Learning Data


Copyright: © 2019 | Pages: 17
DOI: 10.4018/978-1-5225-7528-3.ch006

Abstract

Decision trees may be created in various ways: they may be drawn manually based on data, or they may be induced directly from data using supervised machine learning. Decision trees induced from online learning data can evoke insights that benefit teaching and learning. This work introduces a method for inducing decision trees and addresses how to set tree parameters based on the particular decision-making and research questions at hand. It uses online learning data to create decision trees and to draw practical insights from the resulting data visualizations.
Chapter Preview

Introduction

In its simplest form, a decision tree is a branching structure that starts with an originating root node at the top. Two or more branches emanate from that node, each showing a possible path that may be taken from that starting point. In general, the paths are one-directional, running from the root node through the respective branches toward the leaves (Figure 1).
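To make this structure concrete, the following is a minimal sketch (not from the chapter) of a hand-built binary decision tree with a root node, branches, and leaves; the node labels and the online-learning questions they pose are hypothetical.

```python
# A minimal sketch of the branching structure described above: a hand-built
# binary decision tree. All node labels here are hypothetical examples.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    """A node is either a decision point (question plus two branches) or a leaf."""
    label: str                      # question at internal nodes, outcome at leaves
    left: Optional["Node"] = None   # branch followed when the answer is "yes"
    right: Optional["Node"] = None  # branch followed when the answer is "no"

    @property
    def is_leaf(self) -> bool:
        return self.left is None and self.right is None


# A shallow binary tree like the one in Figure 1: root -> branches -> leaves.
root = Node(
    "Logged in more than 3 times this week?",
    left=Node("Submitted the assignment?",
              left=Node("likely to pass"),
              right=Node("at risk")),
    right=Node("at risk"),
)


def trace(node: Node, answers: list) -> str:
    """Follow one one-directional path from the root toward a leaf."""
    for answer in answers:
        if node.is_leaf:
            break
        node = node.left if answer else node.right
    return node.label


print(trace(root, [True, True]))   # -> likely to pass
print(trace(root, [False]))        # -> at risk
```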

Defined by function, decision trees are branching structures used in various ways. Manually drawn decision trees are read top-down, and they may be understood as models of decisions, or decision paths, that people may take with respect to a particular real-world phenomenon. At each decision juncture, there is a choice among two or more options, and each choice has repercussions for follow-on choices and ultimate outcomes. Such decision-based sequences help capture dependencies. They are reductive, sparse, simple, and parsimonious, so as to aid human thinking about a complex context and to increase the transferability of such decision trees to other contexts. Visually, manually drawn decision trees are 2D diagrams consisting of top-down or left-to-right flows, with objects and arrowed lines (nodes and links, or vertices and edges). Such manually drawn trees may be created for a variety of reasons:

1. To describe actual or theorized decision-making processes (and the implications of those decisions).

2. To highlight critical points in human decision-making (and decision junctures that may be highly important).

3. To introduce strategy into human decision-making (such as in drawing complete game trees to capture all possible moves in a context and all possible outcomes, with varying probabilities of outcomes, informed by game theory) (Dixit, Skeath, & Reiley, 1999, 2004, 2009, pp. 47-57).

4. To capture data patterns, among other purposes.

The underlying data supporting the drawing of decision trees may be conceptual or theoretical. For example, a “what-if” decision tree could be created for simplified disaster scenarios, the various choices individuals may face, and the likely outcomes of respective decisions. Or a theorized model may be expressed as a decision tree to describe what is conceptually predicted from particular decision-making around events that have not yet occurred in the real world but might. Decision trees may also be based on empirical data that map to real-world events and real-world probabilities. Fully defined decision trees are often used in game theory to identify strategies, optimal outcomes, and probable outcomes as decisions are made. Another form of decision tree may be induced from data using any of a range of decision tree algorithms.

Figure 1. Some basic parts of a shallow binary decision tree

This lesser-known form of decision tree involves trees induced from data through machine learning (data mining, pattern identification). The first regression tree algorithm was created in 1963 (Loh, 2014), and in the intervening fifty-some years, decision trees (also called “classification trees” or “regression trees”) have gone through many changes and improvements (Loh, 2014). In their classification application, decision trees may be used to identify relevant attributes that predict labeled categories; in their regression application, they may be used to identify partitions (or sets) of cases that predict continuous numeric outcomes. The induction of decision trees comes from the idea of “inductive learning,” or extracting understandings from evidence by observing and interpreting data patterns. In induced decision trees, while patterns are surfaced computationally, people play a critical role in interpreting and understanding the results.
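As one illustration of such induction, the following is a minimal sketch using scikit-learn's decision tree classifier; the chapter itself works in RapidMiner Studio, so this is an assumed equivalent, and the online-learning features and labels shown are hypothetical.

```python
# A minimal sketch of inducing a classification tree from (hypothetical)
# online-learning data. This is not the chapter's RapidMiner workflow;
# scikit-learn is used here as an illustrative substitute.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-learner records: [logins_per_week, forum_posts, quiz_avg]
X = [
    [1, 0, 55], [2, 1, 60], [8, 5, 88], [7, 4, 91],
    [0, 0, 40], [9, 6, 95], [3, 1, 62], [6, 3, 85],
]
y = ["fail", "fail", "pass", "pass", "fail", "pass", "fail", "pass"]

# criterion and max_depth correspond to the splitting criterion and the
# maximal depth parameters defined in the Key Terms below.
clf = DecisionTreeClassifier(criterion="gini", max_depth=2, random_state=0)
clf.fit(X, y)

# Render the induced tree as a text diagram for human interpretation.
print(export_text(
    clf, feature_names=["logins_per_week", "forum_posts", "quiz_avg"]))
```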

Key Terms in this Chapter

Root Node: The top-most starting node of a decision tree.

Maximal Depth: The upper limit on the number of layers to which a decision tree may be grown.

Confidence Level: A statistical measure of assurance of the accuracy of a decision tree’s output, usually based on error rates.

Induction: The act of learning patterns from a set of evidence (through manual or computational means).

Data Structure: The way information is represented and stored.

Candidate Covariate: A variable that may be an attribute in a decision tree.

Classifier: A type of machine learning program that segments a set of cases into different classes or categorizations.

Predicted Outcome: The class that the decision tree predicts that a particular case belongs to (such as a “class” for a classification tree or a number for a regression tree).

Branch: A sequence of steps (through layers) leading to an outcome in a decision tree.

Validation: Checking for the efficacy of a predictive model.

Variable: A feature that can take a number of values.

Regression Decision Tree: A decision tree whose predicted outcomes are expressed as real numbers.

Best Split Attribute: The variable or covariate that best differentiates cases into particular categories or classes.

Terminal Node: The “answer” or end nodes (leaves) at the bottom of a decision tree.

Classification Decision Tree: A machine classifier that has predicted outcomes as classes or categories.

Machine Learning: The harnessing of statistical and computational methods to identify data patterns without explicit programming.

Target Role Variable: The attribute identified as the classification of interest in a particular decision tree.

Accuracy: A criterion for predictive models, calculated as the ratio of correct predictions to the total number of evaluated cases.

Decision Tree: A predictive model induced from quantitative and qualitative (nominal, categorical) data.

Operator: Any of various processors in RapidMiner Studio, including those for data access, blending, cleansing, modeling, scoring, validation, utility, and extensions.

Criterion: A standard by which data may be split into different leaf nodes (including information gain, gain ratio, Gini index, accuracy, least squares, and others).
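To ground two of the splitting criteria named above, the following is a minimal sketch (not from the chapter) computing the Gini index and information gain (entropy reduction) for a candidate split; the pass/fail labels are hypothetical.

```python
# A minimal sketch of two splitting criteria: Gini index and information gain.
from collections import Counter
from math import log2


def gini(labels: list) -> float:
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())


def entropy(labels: list) -> float:
    """Shannon entropy in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())


def information_gain(parent: list, splits: list) -> float:
    """Entropy of the parent minus the weighted entropy of the child nodes."""
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in splits)


parent = ["pass", "pass", "pass", "fail", "fail", "fail"]
left, right = ["pass", "pass", "pass"], ["fail", "fail", "fail"]

print(gini(parent))                             # 0.5 (maximally mixed, two classes)
print(information_gain(parent, [left, right]))  # 1.0 bit (a perfect split)
```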
