
Sotiris Kotsiantis (University of Patras, Greece and University of Peloponnese, Greece) and Panayotis Pintelas (University of Patras, Greece and University of Peloponnese, Greece)

Copyright: © 2009
Pages: 6

DOI: 10.4018/978-1-60566-026-4.ch495

Chapter Preview

A brief review of what ML includes can be found in Dutton and Conroy (1996), and a historical survey of logic- and instance-based learning is presented in De Mantaras and Armengol (1998).

The first step of predictive data mining is collecting the data set. If a requisite expert is available, he or she can suggest which fields (attributes, features) are the most informative. If not, the simplest method is "brute force": measuring everything available in the hope that the right (informative, relevant) features can be isolated. However, a data set collected by brute force is not directly suitable for induction. In most cases it contains noise and missing feature values, and therefore requires significant preprocessing (Zhang, Zhang, & Yang, 2002). Hodge and Austin (2004) survey contemporary techniques for outlier (noise) detection, and, depending on the circumstances, researchers have a number of methods to choose from for handling missing data (Batista & Monard, 2003). Feature subset selection is the process of identifying and removing as many irrelevant and redundant features as possible (Yu & Liu, 2004). The fact that many features depend on one another often unduly influences the accuracy of supervised ML models; this problem can be addressed by constructing new features from the basic feature set (Markovitch & Rosenstein, 2002).
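The redundancy problem described above can be illustrated with a small filter-style sketch: after "brute force" collection, features that are near-duplicates of an earlier feature are dropped. The correlation threshold (0.95) and the synthetic data are illustrative assumptions, not from the chapter.

```python
# A minimal sketch of filter-style feature subset selection, assuming a
# numeric data matrix. A feature is dropped when its absolute Pearson
# correlation with an already-kept feature exceeds the threshold.
import numpy as np

def drop_redundant_features(X, threshold=0.95):
    """Return (reduced matrix, kept column indices)."""
    corr = np.corrcoef(X, rowvar=False)  # feature-by-feature correlations
    keep = []
    for j in range(X.shape[1]):
        if all(abs(corr[j, k]) < threshold for k in keep):
            keep.append(j)
    return X[:, keep], keep

rng = np.random.default_rng(0)
a = rng.normal(size=(100, 1))
b = rng.normal(size=(100, 1))
# Third column is (almost) a rescaled copy of the first, i.e. redundant.
X = np.hstack([a, b, 2 * a + 0.01 * rng.normal(size=(100, 1))])
X_reduced, kept = drop_redundant_features(X)
print(kept)  # the redundant third column is removed
```

Filter methods like this only look at pairwise redundancy; wrapper methods, which score candidate subsets with the learning algorithm itself, are a common (more expensive) alternative.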

The problem of regression consists of obtaining a functional model that relates the value of a continuous target variable *y* to the values of the predictor variables *x₁, x₂, …, xₙ*. This model is obtained from samples of the unknown regression function, which describe different mappings between the predictors and the target. The traditional approach to predicting a continuous target is classical linear least-squares regression (Fox, 1997), in which the constructed model is a linear equation whose parameters are estimated by a computationally simple procedure on the training set. However, the assumption of linearity between the input features and the predicted value introduces a large bias error in most domains, which is why most studies are directed toward nonlinear and nonparametric regression techniques.
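The closed-form estimation step can be sketched as follows; the noise-free synthetic data and the use of NumPy's least-squares solver are illustrative assumptions, not part of the chapter.

```python
# A minimal illustration of classical linear least-squares regression:
# the model is y ≈ w0 + w1*x1 + w2*x2, and the parameters w are estimated
# in closed form from the training samples.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))        # predictors x1, x2
y = 3.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1]      # noise-free linear target

A = np.hstack([np.ones((200, 1)), X])        # prepend intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)    # solve min ||A w - y||^2
print(np.round(w, 3))                        # ≈ [3.0, 2.0, -1.5]
```

On this noise-free linear data the true coefficients are recovered almost exactly; on a genuinely nonlinear target the same procedure would exhibit the bias error the paragraph describes.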

Data Cleansing: The process of ensuring that all values in a data set are consistent and correctly recorded.

Nearest Neighbor: A technique that predicts the value of each record in a data set from a combination of the values of the k record(s) most similar to it.
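The nearest-neighbor idea above can be sketched in a few lines; the choice of Euclidean distance, averaging as the combination rule, k=3, and the toy data are all illustrative assumptions.

```python
# A small k-nearest-neighbor regression sketch: predict a query's value
# as the mean target of its k closest training records.
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    dists = np.linalg.norm(X_train - x_query, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                    # k closest records
    return y_train[nearest].mean()                     # combine their targets

X_train = np.array([[0.0], [1.0], [2.0], [10.0]])
y_train = np.array([0.0, 1.0, 2.0, 10.0])
print(knn_predict(X_train, y_train, np.array([1.1]), k=3))  # → 1.0
```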

Regression Analysis: A technique that examines the relation of a dependent variable to specified independent variables.

Artificial Neural Networks: Nonlinear predictive models that learn through training and resemble biological neural networks in structure.

Rule Induction: The extraction of useful if-then rules from data on the basis of statistical significance.

Predictive Model: A structure and process for predicting the values of specified variables in a data set.

