How to Train Multilayer Perceptrons Efficiently With Large Data Sets

Hyeyoung Park
Copyright: © 2002 | Pages: 21
ISBN13: 9781930708266 | ISBN10: 1930708262 | EISBN13: 9781591400172
DOI: 10.4018/978-1-930708-26-6.ch010
Cite Chapter

MLA

Park, Hyeyoung. "How to Train Multilayer Perceptrons Efficiently With Large Data Sets." Heuristic and Optimization for Knowledge Discovery, edited by Hussein A. Abbass, et al., IGI Global, 2002, pp. 186-206. https://doi.org/10.4018/978-1-930708-26-6.ch010

APA

Park, H. (2002). How to Train Multilayer Perceptrons Efficiently With Large Data Sets. In H. Abbass, C. Newton, & R. Sarker (Eds.), Heuristic and Optimization for Knowledge Discovery (pp. 186-206). IGI Global. https://doi.org/10.4018/978-1-930708-26-6.ch010

Chicago

Park, Hyeyoung. "How to Train Multilayer Perceptrons Efficiently With Large Data Sets." In Heuristic and Optimization for Knowledge Discovery, edited by Hussein A. Abbass, Charles S. Newton, and Ruhul Sarker, 186-206. Hershey, PA: IGI Global, 2002. https://doi.org/10.4018/978-1-930708-26-6.ch010


Abstract

Feedforward neural networks, or multilayer perceptrons, have been successfully applied to a number of difficult and diverse applications using the gradient descent learning method known as the error backpropagation algorithm. However, backpropagation is known to be extremely slow in many cases, mainly because of plateaus on the error surface. In data mining, the data set is usually large, and the slow learning speed of neural networks is a critical drawback. In this chapter, we present an efficient on-line learning method called adaptive natural gradient learning, which can overcome the plateau problem and can be applied successfully to learning with large data sets. We compare the presented method with various popular learning algorithms aimed at improving learning speed, and briefly discuss the merits and drawbacks of each method, so that readers can obtain some guidance on choosing a proper method for a given application. In addition, we give a number of technical tips that can be implemented easily and at low computational cost, and that can sometimes yield a remarkable improvement in learning speed.
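The method summarized above, adaptive natural gradient learning, preconditions each on-line gradient step with an adaptively estimated inverse of the Fisher information matrix, so that no explicit matrix inversion is ever performed. The sketch below is a minimal illustration of that general idea under my own assumptions, not the chapter's exact algorithm: a one-hidden-layer perceptron with a single output and squared-error (Gaussian-noise) loss, a constant adaptation rate, and a Sherman-Morrison rank-one update of the inverse estimate. All names (e.g. `output_gradient`), constants (`eta`, `eps`), and the teacher-student toy data are hypothetical.

```python
# Minimal sketch of on-line adaptive natural gradient learning for a small
# multilayer perceptron (one hidden layer, scalar output, squared-error /
# Gaussian noise model).  Illustrative only -- not the chapter's algorithm.
# The inverse Fisher information matrix is estimated on-line with a
# Sherman-Morrison rank-one update built from the network's output gradient,
# and the per-example loss gradient is preconditioned by that estimate.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 2, 3
n_params = n_hid * n_in + n_hid + n_hid + 1      # W1, b1, W2, b2

def unpack(w):
    W1 = w[:n_hid * n_in].reshape(n_hid, n_in)
    b1 = w[n_hid * n_in:n_hid * (n_in + 1)]
    W2 = w[n_hid * (n_in + 1):n_hid * (n_in + 2)]
    b2 = w[-1]
    return W1, b1, W2, b2

def forward(w, x):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

def output_gradient(w, x):
    """Gradient of the network output f(x; w) with respect to all parameters."""
    W1, b1, W2, _ = unpack(w)
    _, h = forward(w, x)
    dh = W2 * (1.0 - h ** 2)                     # backpropagate through tanh
    return np.concatenate([np.outer(dh, x).ravel(), dh, h, [1.0]])

# A fixed "teacher" network generates the (slightly noisy) training stream.
w_teacher = rng.standard_normal(n_params)
def sample():
    x = rng.standard_normal(n_in)
    return x, forward(w_teacher, x)[0] + 0.01 * rng.standard_normal()

w = 0.1 * rng.standard_normal(n_params)          # student parameters
G_inv = np.eye(n_params)                         # adaptive inverse-Fisher estimate
eta, eps = 0.01, 0.005                           # step size, adaptation rate

for t in range(5000):
    x, y = sample()
    v = output_gradient(w, x)                    # df/dw at the current example
    e = forward(w, x)[0] - y                     # residual; loss gradient is e * v
    # Sherman-Morrison update of the inverse of  G <- (1 - eps) G + eps v v^T
    Gv = G_inv @ v
    denom = (1.0 - eps) + eps * (v @ Gv)
    G_inv = (G_inv - (eps / denom) * np.outer(Gv, Gv)) / (1.0 - eps)
    # Natural-gradient step: precondition the on-line loss gradient with G_inv.
    w -= eta * (G_inv @ (e * v))
    if t % 1000 == 0:
        print(f"step {t:4d}   single-example squared error {0.5 * e ** 2:.5f}")
```

Because the inverse estimate is updated directly, each on-line step costs O(P^2) in the number of parameters P rather than the O(P^3) of inverting the Fisher matrix from scratch, which is what makes natural gradient descent practical for on-line learning with large data sets.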
