Lluís A. Belanche Muñoz (Universitat Politècnica de Catalunya, Spain)

DOI: 10.4018/978-1-59904-849-9.ch149

Chapter Preview

The problem of learning an input/output relation from a set of examples can be regarded as the task of approximating an unknown function from a set of data points, which are possibly sparse. Concerning approximation by classical feed-forward ANN, these networks implement a parametric approximating function and have been shown to be capable of representing generic classes of functions (such as the continuous or the integrable functions) to an arbitrary degree of accuracy. In general, three questions arise when defining one such parameterized family of functions:

1. What is the most adequate parametric form for a given problem?

2. How can the best parameters for the chosen form be found?

3. What classes of functions can be represented, and how well?

The most typical problems in an ANN supervised learning process, besides the determination of the learning parameters themselves, include (Hertz, Krogh & Palmer, 1991; Hinton, 1989; Bishop, 1995):

1. The possibility of getting stuck in *local optima* of the cost function, in which conventional non-linear optimization techniques will stay forever. The incorporation of a global scheme (like multiple restarts or an annealing schedule) is likely to increase the chance of finding a better solution, although the cost can become prohibitively high. A feed-forward network has multiple equivalent solutions, created by weight permutations and sign flips: every local minimum in a network with a single hidden layer of *h*₁ units has *s*(*h*₁) = *h*₁!·2^*h*₁ equivalent solutions, so the chances of landing in the basin of attraction of one of them are reasonably high. The complexity of the error surface, especially in very high dimensions, makes the possibility of getting trapped a real one.

2. Long *training times*, *oscillations* and network *paralysis*. These features are closely tied to the specific learning algorithm, and relate to bad or too general choices for the parameters of the optimization technique (such as the learning rate). The presence of saddle points, regions where the error surface is very flat, also provokes an extremely slow advance for extensive periods of time. The use of more advanced methods that dynamically set these and other parameters can alleviate the problem.

3. *Non-cumulative* learning. It is hard to take an already trained network and re-train it with additional data without losing previously learned knowledge.

4. The *curse of dimensionality*, roughly stated as the fact that the number of examples needed to represent a given function grows exponentially with the number of dimensions.

5. Difficulty of finding a *structure* in the training data, possibly caused by a very high dimension or a distorting pre-processing scheme.

6. Bad *generalization*, which can be due to several causes: the use of poor training data or attempts to extrapolate beyond them, an excessive number of hidden units, too long a training process, or a badly chosen regularization. All of these can lead to *overfitting* of the training data, in which the ANN adjusts the training set merely as an *interpolation* task.

7. Not being amenable to *inspection*. It is generally arduous to interpret the knowledge learned, especially in large networks or with a high number of model inputs.
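The symmetry count from item 1 and the multiple-restart remedy can be sketched in a few lines. This is a toy illustration only: the one-dimensional cost function and the `train_once` helper are assumptions made for the sketch, not part of any real training library.

```python
import math
import random

def equivalent_minima(h1):
    """Equivalent weight-space solutions for a single hidden layer of
    h1 units: s(h1) = h1! * 2**h1 (unit permutations times sign flips)."""
    return math.factorial(h1) * 2 ** h1

def train_once(rng):
    """Toy stand-in for one gradient-based training run: draw a random
    initial 'weight' and evaluate a cost with two basins of attraction."""
    w = rng.uniform(-1.0, 1.0)
    cost = (w - 0.5) ** 2 * (w + 0.8) ** 2
    return w, cost

def multi_restart(n_restarts, seed=0):
    """Global scheme from item 1: rerun the local optimizer from several
    random initializations and keep the lowest-cost result."""
    rng = random.Random(seed)
    return min((train_once(rng) for _ in range(n_restarts)),
               key=lambda wc: wc[1])

print(equivalent_minima(3))   # 3! * 2**3 = 48 equivalent minima
```

Note that restarts only raise the probability of reaching a good basin; they give no guarantee, which is why the text calls the cost of a full global search prohibitively high.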

Evolutionary Algorithm: A computer simulation in which a population of individuals (abstract representations of candidate solutions to an optimization problem) are stochastically selected, recombined, mutated, and then removed or kept, based on their relative fitness to the problem.

Feed-Forward Artificial Neural Network: Artificial Neural Network whose graph has no cycles.

Learning Algorithm: Method or algorithm by virtue of which an Artificial Neural Network develops a representation of the information present in the learning examples, by modification of the weights.

Neuron Model: The computation of an artificial neuron, expressed as a function of its input and its weight vector and other local information.

Architecture: The number of artificial neurons, their arrangement and connectivity.

Weight: A free parameter of an Artificial Neural Network that can be modified through the action of a Learning Algorithm to obtain desired responses to certain input stimuli.

Artificial Neural Network: An information processing structure without global or shared memory that takes the form of a directed graph, where each of the computing elements (“neurons”) is a simple processor with internal, adjustable parameters that operates only when all its incoming information is available.
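The glossary definitions of Neuron Model, Architecture and Feed-Forward Artificial Neural Network can be made concrete with a minimal sketch. The logistic activation and the particular weights below are illustrative assumptions, not values from the chapter.

```python
import math

def neuron(x, w, b):
    """Neuron model: the output is a function of the input vector x,
    the weight vector w and local information (here, a bias b),
    using a logistic activation."""
    s = sum(xi * wi for xi, wi in zip(x, w)) + b
    return 1.0 / (1.0 + math.exp(-s))

def feed_forward(x, layers):
    """Feed-forward network: the underlying graph has no cycles, so each
    layer fires once all its incoming information is available."""
    for weights, biases in layers:
        x = [neuron(x, w, b) for w, b in zip(weights, biases)]
    return x

# Architecture: 2 inputs, 2 hidden units, 1 output (illustrative weights).
layers = [
    ([[0.5, -0.4], [0.3, 0.8]], [0.0, -0.1]),   # hidden layer
    ([[1.2, -0.7]],             [0.2]),          # output layer
]
y = feed_forward([1.0, 0.0], layers)
print(y)  # a single output in (0, 1)
```

A Learning Algorithm, in these terms, is any procedure that adjusts the entries of `layers` (the weights) to move the network's responses toward desired outputs.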
