Learning in Feed-Forward Artificial Neural Networks I


Lluís A. Belanche Muñoz
Copyright: © 2009 | Pages: 8
ISBN13: 9781599048499 | ISBN10: 1599048493 | EISBN13: 9781599048505
DOI: 10.4018/978-1-59904-849-9.ch148
Cite Chapter

MLA

Belanche Muñoz, Lluís A. "Learning in Feed-Forward Artificial Neural Networks I." Encyclopedia of Artificial Intelligence, edited by Juan Ramón Rabuñal Dopico, et al., IGI Global, 2009, pp. 1004-1011. https://doi.org/10.4018/978-1-59904-849-9.ch148

APA

Belanche Muñoz, L. A. (2009). Learning in Feed-Forward Artificial Neural Networks I. In J. Rabuñal Dopico, J. Dorado, & A. Pazos (Eds.), Encyclopedia of Artificial Intelligence (pp. 1004-1011). IGI Global. https://doi.org/10.4018/978-1-59904-849-9.ch148

Chicago

Belanche Muñoz, Lluís A. "Learning in Feed-Forward Artificial Neural Networks I." In Encyclopedia of Artificial Intelligence, edited by Juan Ramón Rabuñal Dopico, Julian Dorado, and Alejandro Pazos, 1004-1011. Hershey, PA: IGI Global, 2009. https://doi.org/10.4018/978-1-59904-849-9.ch148


Abstract

The view of artificial neural networks as adaptive systems has led to the development of ad-hoc generic procedures known as learning rules. The first of these is the Perceptron Rule (Rosenblatt, 1962), useful for single-layer feed-forward networks and linearly separable problems. Its simplicity and beauty, together with the existence of a convergence theorem, made it a basic departure point for neural learning algorithms. This algorithm is a particular case of the Widrow-Hoff or delta rule (Widrow & Hoff, 1960), applicable to continuous networks with no hidden layers and an error function that is quadratic in the parameters.
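
The chapter itself gives no code; the following is a minimal sketch, assuming NumPy and a toy logical-AND data set of my own choosing, of the two update rules named in the abstract: the Perceptron Rule (Rosenblatt, 1962) and the Widrow-Hoff (delta) rule (Widrow & Hoff, 1960) for a network with no hidden layer. Function names and parameters are illustrative assumptions, not taken from the chapter.

import numpy as np

def perceptron_train(X, y, lr=1.0, max_epochs=100):
    # Perceptron Rule for targets in {-1, +1}; the convergence theorem
    # guarantees termination when X is linearly separable.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(max_epochs):
        errors = 0
        for x_i, t in zip(X, y):
            if t * (w @ x_i + b) <= 0:      # misclassified pattern
                w += lr * t * x_i           # move the separating hyperplane
                b += lr * t
                errors += 1
        if errors == 0:                     # no mistakes: training is done
            break
    return w, b

def delta_rule_train(X, y, lr=0.1, epochs=50):
    # Widrow-Hoff (delta / LMS) rule: gradient descent on an error function
    # that is quadratic in the parameters, using a linear output unit.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x_i, t in zip(X, y):
            e = t - (w @ x_i + b)           # output error for this pattern
            w += lr * e * x_i               # delta rule weight update
            b += lr * e
    return w, b

# Toy linearly separable problem (logical AND with {-1, +1} targets)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, -1, -1, 1])
print(perceptron_train(X, y))
print(delta_rule_train(X, y))

On this toy set the perceptron updates stop once every pattern is classified correctly, while the delta rule keeps descending towards the least-squares fit of the linear output to the targets, which is the distinction the abstract draws between the two procedures.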
