Multi-Objective Training of Neural Networks

M. P. Cuéllar, Miguel Delgado, M. C. Pegalajar
Copyright © 2009 | Pages: 7
DOI: 10.4018/978-1-59904-849-9.ch168

Abstract

Traditionally, applying a neural network (Haykin, 1999) to solve a problem has required several steps before the desired network is obtained: data preprocessing, model selection, topology optimization, and training. Each of these tasks usually demands a large amount of computational time and human interaction, particularly topology optimization and network training. There have been many proposals to reduce the effort these tasks require and to provide experts with a robust methodology. For example, Giles et al. (1995) provide a constructive method to iteratively optimize the topology of a recurrent network. Other methods attempt to reduce the complexity of the network structure by removing unnecessary nodes and connections, as in (Morse, 1994). In recent years, evolutionary algorithms have emerged as promising tools for this problem, and many competitive approaches exist in the literature. For example, Blanco et al. (2001) proposed a master-slave genetic algorithm in which the master algorithm trains the network and the slave algorithm optimizes its size. For a general view of the problem and of evolutionary algorithms for neural network training and optimization, we refer the reader to (Yao, 1999). Although the literature on genetic algorithms and neural networks is very extensive, we would like to highlight the recent popularity of multi-objective optimization (Coello et al., 2002; Jin, 2006), especially for the problem of simultaneous training and topology optimization of neural networks. These methods have been shown to perform well for this task in previous works, although most of them are proposed for feedforward models. They attempt to optimize the structure of the network (number of connections, hidden units, or layers) while training it at the same time. Multi-objective algorithms may provide important advantages in the simultaneous training and optimization of neural networks: they can return a set of optimal networks instead of a single one; they can speed up the optimization process; they may be preferable to a weight-aggregation procedure for handling the regularization problem; and they are more suitable when the designer wishes to combine different error measures during training. A recent review of these techniques may be found in (Jin, 2006).
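To make the competing objectives concrete, the following is a minimal Python sketch, assuming a toy linear model; the names (mse, complexity, dominates) and the encoding are illustrative, not the chapter's actual implementation. Each candidate network is scored on an error objective and a complexity objective, and Pareto dominance compares candidates without collapsing the two objectives into a single weighted sum, as a weight-aggregation procedure would:

```python
import numpy as np


def mse(weights, mask, X, y):
    """Error objective: a linear model's mean squared error stands in
    for whatever error measure the designer chooses."""
    pred = X @ (weights * mask)  # masked-out connections contribute nothing
    return float(np.mean((pred - y) ** 2))


def complexity(mask):
    """Structural objective: the number of active connections."""
    return int(mask.sum())


def dominates(a, b):
    """Pareto dominance: a is no worse in every objective and strictly
    better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = X @ np.array([1.0, -2.0, 0.0, 0.0])  # only two inputs actually matter

# Two candidate networks: (weights, connection mask).
dense = (rng.normal(size=4), np.ones(4))                        # fully connected
sparse = (rng.normal(size=4), np.array([1.0, 1.0, 0.0, 0.0]))   # pruned

f_dense = (mse(*dense, X, y), complexity(dense[1]))
f_sparse = (mse(*sparse, X, y), complexity(sparse[1]))
print(f_dense, f_sparse, "sparse dominates dense:", dominates(f_sparse, f_dense))
```

A multi-objective evolutionary algorithm built on such a comparison returns the whole non-dominated front of trade-offs between error and size, rather than a single network.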
Chapter Preview

Background

Multi-objective algorithms have become popular in recent years for the simultaneous training and topology optimization of neural networks, because of the innovations they bring to the problem. Some authors have addressed it through the evolution of ensembles, as in DIVACE-II (Chandra et al., 2006), which also implements different levels of coevolution. In other works, the networks are fully evolved and the evolutionary operators are designed to handle both training and structure optimization. Among these, some approaches reduce the number of network neurons, while others reduce the number of network connections. In the first group (Abbass et al., 2001; Delgado et al., 2005; González et al., 2003), the optimization is easier, since the encoding of a network contains fewer degrees of freedom than in the second group; the disadvantage is that the networks obtained are fully connected. The methods in the second group (Jin et al., 2004; Cuéllar et al., 2007), on the other hand, reduce the number of connections, but it is not guaranteed that the number of network nodes is also minimal. Nevertheless, experimental results have shown that the networks obtained with these proposals are small (Jin et al., 2004). The sketch below contrasts the two encodings.
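The following illustrative Python sketch contrasts the two encoding styles; the genome layouts and field names are assumptions for exposition, not taken from the cited papers. The node-level genome carries one structural gene per hidden unit, so it has fewer degrees of freedom but every active unit stays fully connected; the connection-level genome carries one structural gene per connection, so arbitrary sparse structures are expressible, but nothing forces whole units to disappear:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, max_hidden, n_out = 3, 5, 1

# Node-level encoding: one structural gene per hidden unit; every unit
# that is switched on remains fully connected to inputs and outputs.
node_genome = {
    "active_units": np.array([1, 1, 0, 1, 0]),
    "w_in": rng.normal(size=(max_hidden, n_in)),
    "w_out": rng.normal(size=(n_out, max_hidden)),
}

# Connection-level encoding: one structural gene per connection, so any
# sparse structure is expressible, but a unit may survive with a single
# stray connection, so the node count is not necessarily minimal.
conn_genome = {
    "mask_in": (rng.random((max_hidden, n_in)) > 0.5).astype(int),
    "mask_out": (rng.random((n_out, max_hidden)) > 0.5).astype(int),
    "w_in": rng.normal(size=(max_hidden, n_in)),
    "w_out": rng.normal(size=(n_out, max_hidden)),
}

# The node-level genome has far fewer structural degrees of freedom.
print("structural genes:", node_genome["active_units"].size,
      "vs", conn_genome["mask_in"].size + conn_genome["mask_out"].size)
```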

Key Terms in this Chapter

Multi-Objective Optimization: Optimization of a problem that involves the satisfaction or optimization of two or more objectives, which are sometimes in conflict with each other.

Dynamical Recurrent Neural Networks: Artificial neural networks that include recurrent connections in their structure. They can process patterns of undetermined size and/or patterns indexed in time. The output of these networks at time t+1 is computed from the network inputs at time t and the network state, which is provided by the recurrent connections (a minimal sketch of this recurrence follows these key terms).

Time-Series Prediction: The problem of predicting the future values of a time series from a few past values of the data set.

Evolutionary Algorithm: An optimization algorithm based on Darwinian natural evolution.

Regularization: Optimization of both the complexity and the performance of a neural network, by means of either a linear aggregation of the objectives or a multi-objective algorithm.

Ensembles: A self-contained part of a neural network (a neuron, a connection, a neuron together with its connections...) that, combined with other ensembles, builds a neural network that solves a problem.

Time-Series: Data sequence indexed in time.
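As a complement to the definition of dynamical recurrent neural networks above, here is a minimal Python sketch of the recurrence it describes; the weight names (W_in, W_rec, W_out) and the tanh activation are assumptions for illustration. The state at time t+1 is computed from the input at time t and the previous state carried by the recurrent connections, and the output at t+1 is read from that new state:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 2, 4, 1
W_in = rng.normal(size=(n_hidden, n_in))     # input-to-hidden weights
W_rec = rng.normal(size=(n_hidden, n_hidden))  # recurrent (state) weights
W_out = rng.normal(size=(n_out, n_hidden))   # hidden-to-output weights


def step(x_t, h_t):
    """One recurrent update: new state from the input and the previous state."""
    h_next = np.tanh(W_in @ x_t + W_rec @ h_t)  # network state at t+1
    y_next = W_out @ h_next                     # output at t+1
    return y_next, h_next


# Process a sequence of arbitrary length, as the definition notes.
h = np.zeros(n_hidden)
for x in rng.normal(size=(6, n_in)):
    y, h = step(x, h)
print(y)
```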
