Ensemble Learning for Regression

Niall Rooney
Copyright © 2009 | Pages: 6
DOI: 10.4018/978-1-60566-010-3.ch120

Abstract

The concept of ensemble learning has its origins in research from the late 1980s and early 1990s into combining a number of artificial neural network (ANN) models for regression tasks. Ensemble learning is now a widely deployed and researched topic within the areas of machine learning and data mining. As a general definition, ensemble learning refers to applying more than one learning model to a particular machine learning problem using some method of integration, with the goal that the ensemble as a unit will outperform any of its individual members for the given learning task. Ensemble learning has been extended to cover other learning tasks such as classification (refer to Kuncheva, 2004 for a detailed overview of this area), online learning (Fern & Givan, 2003) and clustering (Strehl & Ghosh, 2003). The focus of this article is to review ensemble learning with respect to regression, where by regression we refer to the supervised learning task of creating a model that relates a continuous output variable to a vector of input variables.

Background

Ensemble learning raises two issues that need to be addressed: ensemble generation (how does one generate the base models, or members, of the ensemble, and how large should the ensemble be?) and ensemble integration (how does one integrate the base models' predictions to improve performance?). Some ensemble schemes address these issues separately; others, such as Bagging (Breiman, 1996a) and Boosting (Freund & Schapire, 1996), do not. The problem of ensemble generation where each base model uses the same learning algorithm (homogeneous learning) is generally addressed by a number of different techniques: using different samples of the training data or different feature subsets for each base model, or alternatively, if the learning method has a set of learning parameters, adjusting these to different values for each base model. An alternative generation approach is to build the models from a set of different learning algorithms (heterogeneous learning). There has been less research in this latter area due to the increased complexity of effectively combining models derived from different algorithms. Ensemble integration can be addressed by one of two mechanisms: either the predictions of the base models are combined in some fashion during the application phase to give an ensemble prediction (combination approach), or the prediction of one base model is selected according to some criterion to form the final prediction (selection approach). Both selection and combination can be either static, where the learned model does not alter, or dynamic, where the prediction strategy is adjusted for each test instance; a minimal sketch of both issues follows this paragraph.
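
As a concrete illustration of these two issues, the following minimal sketch (in Python, assuming NumPy and scikit-learn are available; the decision-tree base learner, the toy data and all names are illustrative assumptions rather than the chapter's own method) generates a homogeneous ensemble from bootstrap samples of the training data, in the spirit of Bagging (Breiman, 1996a), and then integrates it by two static strategies: combination, by averaging the base predictions, and selection, by using the prediction of the single base model with the lowest training error.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise (illustrative only).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)

# Ensemble generation (homogeneous): train each base model on a
# bootstrap sample of the training data, as in Bagging.
n_models = 25
ensemble = []
for _ in range(n_models):
    idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
    ensemble.append(DecisionTreeRegressor(max_depth=4).fit(X[idx], y[idx]))

X_test = np.linspace(-3, 3, 50).reshape(-1, 1)

# Ensemble integration, combination approach (static): average the
# base models' predictions to form the ensemble prediction.
all_preds = np.stack([m.predict(X_test) for m in ensemble])
y_combined = all_preds.mean(axis=0)

# Ensemble integration, selection approach (static): select the one
# base model with the lowest training error and use its prediction alone.
train_mse = [np.mean((m.predict(X) - y) ** 2) for m in ensemble]
y_selected = ensemble[int(np.argmin(train_mse))].predict(X_test)

A dynamic variant would recompute the combination weights, or re-select the base model, separately for each test instance, for example according to each model's error on the training instances nearest to that test point.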
