Neural Network Based Classifier Ensembles: A Comparative Analysis


B. Verma
DOI: 10.4018/978-1-4666-1833-6.ch014

Abstract

This chapter presents the state of the art in classifier ensembles and their comparative performance analysis. The main aim and focus of this chapter is to present and compare the author’s recently developed neural network based classifier ensembles. The three types of neural classifier ensembles are considered and discussed. The first type is a classifier ensemble that uses a neural network for all its base classifiers. The second type is a classifier ensemble that uses a neural network as one of the classifiers among many of its base classifiers. The third and final type is a classifier ensemble that uses a neural network as a fusion classifier. The chapter reviews recent neural network based ensemble classifiers and compares their performances with other machine learning based classifier ensembles such as bagging, boosting, and rotation forest. The comparison is conducted on selected benchmark datasets from UCI machine learning repository.
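As a concrete illustration of the third type (a neural network serving as the fusion classifier), the following sketch stacks two conventional base classifiers and lets an MLP combine their outputs. It is a minimal example only, assuming scikit-learn and a UCI-style benchmark dataset; the chapter’s own fusion architectures may differ.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

fusion_ensemble = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("knn", KNeighborsClassifier()),
    ],
    # the neural network acts as the fusion classifier, trained on the
    # base classifiers' outputs
    final_estimator=MLPClassifier(max_iter=2000, random_state=0),
)
fusion_ensemble.fit(X_tr, y_tr)
print("stacked ensemble accuracy:", fusion_ensemble.score(X_te, y_te))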

Introduction

Classifier ensembles, also known as fusion of classifiers, hybrid systems, mixture of experts, multiple classifier systems, combination of multiple classifiers, and committee of classifiers, are approaches that train multiple classifiers and fuse their decisions to produce the final decision. Recent research results show that classifier ensembles can produce better classification accuracy than an individual classifier and can be powerful tools for solving many real world problems.
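A minimal sketch of this basic scheme is given below, assuming scikit-learn: three different base classifiers, one of them a neural network (MLP), are trained on the same data and their decisions are fused by majority vote. The particular classifiers and dataset are illustrative choices, not ones prescribed by the chapter.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("knn", KNeighborsClassifier()),
        # a neural network as one of the base classifiers (scaled inputs help MLP training)
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(max_iter=2000, random_state=0))),
    ],
    voting="hard",  # fuse the base classifiers' decisions by majority vote
)
ensemble.fit(X_tr, y_tr)

print("individual vs fused accuracy:")
for name, clf in ensemble.named_estimators_.items():
    print(f"  {name}: {clf.score(X_te, y_te):.3f}")
print(f"  ensemble: {ensemble.score(X_te, y_te):.3f}")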

The history of the first classifier ensemble dates back to 1979 (Polikar, 2006), although in the early days it was not called a classifier ensemble. According to a review paper (Polikar, 2006), Dasarathy and Sheela (1979) proposed a system that used two or more classifiers. The history of the first neural network based classifier ensemble dates back to 1990, when Hansen and Salamon (1990) showed that the generalization performance of a neural network can be improved using an ensemble of similarly configured neural networks.

Research in classifier ensembles has grown significantly in the past two decades, and many classifier ensemble techniques have been applied to solve a number of real world problems (Aviden, 2007; Ma et al., 2007; Wei et al., 2010; Takemura et al., 2010; Su et al., 2009; Silva, 2010; Kuncheva et al., 2010), particularly by the computational intelligence research community. Many new classifier ensembles have been proposed and some promising results have been published in the literature. Windeatt (2006) proposed and investigated a Multilayer Perceptron (MLP) based classifier ensemble and described a new measure that can predict the number of training epochs needed to achieve optimal performance in an ensemble of MLP classifiers. The measure is computed between pairs of patterns in the training data and is based on a spectral representation of a Boolean function; this representation characterizes the mapping from classifier decisions to the target label and allows accuracy and diversity to be incorporated within a single measure. Rodriguez et al. (2006) proposed the rotation forest classifier ensemble, in which the base classifiers are trained by randomly splitting the feature set into K subsets and applying principal component analysis to each subset. They compared the results with other existing ensembles and showed a significant improvement in accuracy. Maclin et al. (1995) proposed a neural network based ensemble classifier in which different network weights are used to initialise each base neural network’s learning process in order to diversify the base classifiers. Yamaguchi et al. (2009) proposed an ensemble approach that used neural networks with different initial weights to classify land surface images obtained from sensors; the approach achieved better generalisation than other existing approaches.
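The following is a simplified sketch of the rotation forest idea outlined above: for each base classifier, the feature set is randomly split into K subsets, PCA is fitted on each subset, and the per-subset components are assembled into a rotation matrix applied to the data before a decision tree is trained. It assumes scikit-learn and NumPy, omits the class/bootstrap subsampling that Rodriguez et al. apply before each PCA, and skips mean-centering, so it should be read as an illustration rather than the full algorithm.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def build_rotation(X, K, rng):
    # Randomly split the features into K subsets, fit PCA on each subset, and
    # assemble the per-subset components into one sparse rotation matrix.
    n_features = X.shape[1]
    subsets = np.array_split(rng.permutation(n_features), K)
    R = np.zeros((n_features, n_features))
    col = 0
    for subset in subsets:
        components = PCA().fit(X[:, subset]).components_  # (n_comp, len(subset))
        R[np.ix_(subset, np.arange(col, col + components.shape[0]))] = components.T
        col += components.shape[0]
    return R

n_trees, K = 10, 3
ensemble = []
for _ in range(n_trees):
    R = build_rotation(X_tr, K, rng)  # a new random rotation per base classifier
    tree = DecisionTreeClassifier(random_state=0).fit(X_tr @ R, y_tr)
    ensemble.append((R, tree))

# Fuse the base classifiers' decisions by majority vote
# (the same rotation is applied to the test data as was used in training).
votes = np.stack([tree.predict(X_te @ R) for R, tree in ensemble])
y_pred = np.array([np.bincount(col).argmax() for col in votes.T])
print("rotation ensemble accuracy:", (y_pred == y_te).mean())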
