How to Train Multilayer Perceptrons Efficiently With Large Data Sets

Hyeyoung Park (Brain Science Institute, Japan)
Copyright: © 2002 | Pages: 21
DOI: 10.4018/978-1-930708-26-6.ch010

Abstract

Feedforward neural networks, or multilayer perceptrons, have been applied successfully to a number of difficult and diverse applications using the gradient descent learning method known as the error backpropagation algorithm. However, backpropagation is known to be extremely slow in many cases, mainly due to plateaus. In data mining, data sets are usually large, and the slow learning speed of neural networks is a critical drawback. In this chapter, we present an efficient on-line learning method called adaptive natural gradient learning, which alleviates the plateau problem and can be applied successfully to learning with large data sets. We compare the presented method with various popular learning algorithms aimed at improving learning speed, and briefly discuss the merits and drawbacks of each so that readers can choose a suitable method for a given application. In addition, we give a number of technical tips that can be implemented at low computational cost and can sometimes yield a remarkable improvement in learning speed.
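To make the idea concrete, the sketch below shows one common form of adaptive natural gradient learning: an on-line estimate of the inverse Fisher information matrix is maintained with a rank-one recursion, and the ordinary gradient is premultiplied by that estimate before each parameter update. Everything here is an illustrative assumption (the toy regression task, the tiny network size, the learning rate `eta`, and the decaying adaptation rate `eps`); it is a minimal sketch of the technique, not the chapter's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-d regression task (illustrative assumption, not from the chapter)
X = rng.uniform(-2.0, 2.0, size=200)
Y = np.sin(X)

H = 3                 # hidden units (illustrative)
D = 3 * H + 1         # parameters: W1 (H), b1 (H), w2 (H), b2 (1)
theta = 0.1 * rng.standard_normal(D)

def unpack(th):
    return th[:H], th[H:2 * H], th[2 * H:3 * H], th[3 * H]

def forward(th, x):
    w1, b1, w2, b2 = unpack(th)
    h = np.tanh(w1 * x + b1)
    return h @ w2 + b2

def output_grad(th, x):
    # Gradient of the network output f(x; theta) w.r.t. theta,
    # plus the output itself (used to form the squared-error gradient).
    w1, b1, w2, b2 = unpack(th)
    h = np.tanh(w1 * x + b1)
    f = h @ w2 + b2
    dh = 1.0 - h ** 2
    df = np.concatenate([w2 * dh * x, w2 * dh, h, [1.0]])
    return f, df

eta = 0.01                # learning rate (assumption)
G_inv = np.eye(D)         # running estimate of the inverse Fisher matrix

for t in range(4000):
    x, y = X[t % len(X)], Y[t % len(X)]
    f, df = output_grad(theta, x)
    e = f - y                         # residual; loss grad = e * df
    eps = 1.0 / (t + 100.0)           # decaying adaptation rate (assumption)
    v = G_inv @ df
    # Rank-one recursion for the inverse Fisher estimate:
    #   G_inv <- (1 + eps) G_inv - eps (G_inv df)(G_inv df)^T
    G_inv = (1.0 + eps) * G_inv - eps * np.outer(v, v)
    # Natural gradient step: theta <- theta - eta * G_inv @ (e * df)
    theta -= eta * e * v

mse = np.mean([(forward(theta, x) - y) ** 2 for x, y in zip(X, Y)])
print(f"final MSE: {mse:.4f}")
```

The key point is the premultiplication by `G_inv`: on a plateau, where the ordinary gradient is nearly flat in some directions, the inverse-Fisher estimate rescales those directions and lets the update make progress where plain gradient descent stalls. Updating the inverse directly with a rank-one recursion avoids the O(D^3) cost of inverting the Fisher matrix at every step.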
