Supervised Artificial Neural Networks (ANN) are information processing systems that adapt their functionality as a result of exposure to input-output examples. To this end, there exist generic procedures and techniques, known as learning rules. The most widely used in the neural network context rely on derivative information and are typically associated with the Multilayer Perceptron (MLP). Other kinds of supervised ANN have developed their own techniques; such is the case of Radial Basis Function (RBF) networks (Poggio & Girosi, 1989). There has also been considerable work on the development of ad hoc learning methods based on evolutionary algorithms.
The problem of learning an input/output relation from a set of examples can be regarded as the task of approximating an unknown function from a set of data points, which may be sparse. Concerning approximation by classical feed-forward ANN, these networks implement a parametric approximating function and have been shown to be capable of representing generic classes of functions (such as the continuous or integrable functions) to an arbitrary degree of accuracy. In general, three questions arise when defining such a parameterized family of functions:
What is the most adequate parametric form for a given problem?
How to find the best parameters for the chosen form?
What classes of functions can be represented and how well?
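These questions can be made concrete with a small sketch. The following pure-Python fragment (all weights and dimensions are arbitrary, purely illustrative choices, not taken from the chapter) implements the parametric family computed by a one-hidden-layer feed-forward network:

```python
import math

def mlp(x, W1, b1, W2, b2):
    """One-hidden-layer feed-forward network: the parametric family
    f(x; W1, b1, W2, b2) = W2 . tanh(W1 x + b1) + b2."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(v * h for v, h in zip(W2, hidden)) + b2

# Hypothetical parameters: 2 inputs, 3 hidden units, 1 output.
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b1 = [0.0, 0.1, -0.1]
W2 = [1.0, -0.5, 0.25]
b2 = 0.05
y = mlp([1.0, 2.0], W1, b1, W2, b2)
```

Choosing the parametric form amounts to fixing the architecture and activation; finding the best parameters is the learning problem discussed below.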
The most typical problems in an ANN supervised learning process, besides the determination of the learning parameters themselves, include (Hertz, Krogh & Palmer, 1991; Hinton, 1989; Bishop, 1995):
The possibility of getting stuck in local optima of the cost function, in which conventional non-linear optimization techniques will stay forever. The incorporation of a global scheme (such as multiple restarts or an annealing schedule) is likely to increase the chance of finding a better solution, although the cost can become prohibitively high. A feed-forward network has multiple equivalent solutions, created by weight permutations and sign flips. Every local minimum in a network with a single hidden layer of h1 units has s(h1) = h1!·2^h1 equivalent solutions, so the chances of landing in the basin of attraction of one of them are reasonably high. The complexity of the error surface, especially in very high dimensions, makes the possibility of getting trapped a real one.
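A minimal sketch of the multiple-restarts scheme mentioned above, applied to a toy one-dimensional cost with both a local and a global minimum (the cost function, step sizes, and counts are illustrative choices, not taken from the chapter):

```python
import random

def multi_restart_descent(f, grad, n_restarts=20, steps=200, lr=0.01,
                          span=5.0, seed=0):
    """Run plain gradient descent from several random initial points
    and keep the best end point found (a simple global scheme)."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_restarts):
        x = rng.uniform(-span, span)
        for _ in range(steps):
            x -= lr * grad(x)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

# Toy multimodal cost: a local minimum near x = 1.13 and the
# global one near x = -1.30.
f = lambda x: x**4 - 3 * x**2 + x
grad = lambda x: 4 * x**3 - 6 * x + 1
x_star, f_star = multi_restart_descent(f, grad)
```

A single descent run started in the wrong basin returns the local minimum; the best-of-several scheme usually recovers the global one, at a proportional increase in cost.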
Long training times, oscillations and network paralysis. These features are highly tied to the specific learning algorithm, and stem from bad or overly generic choices for the parameters of the optimization technique (such as the learning rate). The presence of saddle points, regions where the error surface is very flat, also provokes an extremely slow advance for extensive periods of time. The use of more advanced methods that dynamically set these and other parameters can alleviate the problem.
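One classical example of dynamically setting such a parameter is the "bold driver" heuristic for the learning rate: grow the rate after each successful step, and shrink it (rejecting the step) when the error rises. A sketch on a deliberately flat toy cost (function and constants are illustrative assumptions, not from the chapter):

```python
def bold_driver(f, grad, x0, lr=0.1, up=1.1, down=0.5, steps=100):
    """'Bold driver' sketch: grow the learning rate after a successful
    step; shrink it and undo the step when the error rises."""
    x, fx = x0, f(x0)
    for _ in range(steps):
        x_new = x - lr * grad(x)
        f_new = f(x_new)
        if f_new <= fx:
            x, fx, lr = x_new, f_new, lr * up   # accept, speed up
        else:
            lr *= down                          # reject, slow down
    return x, fx

# Toy cost with a very flat region around its minimum at x = 2,
# where a fixed small learning rate would crawl.
f = lambda x: (x - 2.0) ** 4
grad = lambda x: 4 * (x - 2.0) ** 3
x_min, f_min = bold_driver(f, grad, x0=-3.0)
```

The adaptive rate grows through the flat region and shrinks automatically when a step overshoots, avoiding both paralysis and oscillation for this toy case.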
Non-cumulative learning. It is hard to take an already trained network and re-train it with additional data without losing previously learned knowledge.
The curse of dimensionality, roughly stated as the fact that the number of examples needed to represent a given function grows exponentially with the number of dimensions.
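The exponential growth is easy to quantify: sampling a unit hypercube at a fixed resolution of k points per axis requires k^d examples in d dimensions:

```python
def grid_points(points_per_axis, dims):
    """Examples needed to cover a unit hypercube on a regular grid:
    grows as k**d, i.e. exponentially in the number of dimensions."""
    return points_per_axis ** dims

# With a modest 10 points per axis:
sizes = [grid_points(10, d) for d in (1, 2, 5, 10)]
# d=1 needs 10 examples; d=10 already needs 10**10.
```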
Difficulty in finding structure in the training data, possibly caused by a very high dimensionality or a distorting pre-processing scheme.
Bad generalization, which can be due to several causes: the use of poor training data or attempts to extrapolate beyond them, an excessive number of hidden units, overly long training processes, or a badly chosen regularization. All of these can lead to an overfitting of the training data, in which the ANN fits the training set merely as an interpolation task.
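A common guard against overfitting caused by overly long training is early stopping on a held-out validation set: halt when the validation error stops improving. A sketch (the error curve below is invented for illustration):

```python
def early_stopping(val_errors, patience=3):
    """Return the epoch at which training should have stopped: the
    last epoch whose validation error was the best so far, once no
    improvement has been seen for `patience` consecutive epochs."""
    best_epoch, best_err, waited = 0, float("inf"), 0
    for epoch, err in enumerate(val_errors):
        if err < best_err:
            best_epoch, best_err, waited = epoch, err, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch

# Validation error falls, then rises as the network starts to
# overfit the training set.
curve = [1.0, 0.6, 0.4, 0.35, 0.37, 0.41, 0.50, 0.66]
stop = early_stopping(curve)  # epoch 3, the validation minimum
```

The `patience` margin prevents halting on small fluctuations; the returned epoch marks the model to keep.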
Not amenable to inspection. It is generally arduous to interpret the knowledge learned, especially in large networks or with a high number of model inputs.
Key Terms in this Chapter
Evolutionary Algorithm: A computer simulation in which a population of individuals (abstract representations of candidate solutions to an optimization problem) are stochastically selected, recombined, mutated, and then removed or kept, based on their relative fitness to the problem.
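A minimal sketch of one such simulation, on the classical OneMax toy problem (maximize the number of 1 bits in a string); representation, operators, and all parameter values are illustrative choices:

```python
import random

def evolve(fitness, n_bits=20, pop_size=30, generations=60,
           p_mut=0.05, seed=1):
    """Generational evolutionary algorithm over bit strings:
    binary tournament selection, one-point crossover, bit-flip
    mutation, full replacement of the population each generation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():  # binary tournament on relative fitness
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (rng.random() < p_mut)   # bit-flip mutation
                     for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: fitness is simply the number of 1 bits.
best = evolve(fitness=sum)
```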
Feed-Forward Artificial Neural Network: Artificial Neural Network whose graph has no cycles.
Learning Algorithm: Method or algorithm by virtue of which an Artificial Neural Network develops a representation of the information present in the learning examples, by modification of the weights.
Neuron Model: The computation performed by an artificial neuron, expressed as a function of its input, its weight vector, and other local information.
Architecture: The number of artificial neurons, their arrangement, and their connectivity.
Weight: A free parameter of an Artificial Neural Network that can be modified through the action of a Learning Algorithm to obtain desired responses to certain input stimuli.
Artificial Neural Network: An information processing structure without global or shared memory, taking the form of a directed graph in which each computing element (“neuron”) is a simple processor with internal, adjustable parameters that operates only when all its incoming information is available.