On Simulation Performance of Feedforward and NARX Networks Under Different Numerical Training Algorithms

Salim Lahmiri
DOI: 10.4018/978-1-4666-8823-0.ch005

Abstract

This chapter compares the forecasting ability of the backpropagation neural network (BPNN) and the nonlinear autoregressive with exogenous inputs (NARX) network trained with different algorithms, namely the quasi-Newton (Broyden-Fletcher-Goldfarb-Shanno, BFGS), conjugate gradient (Fletcher-Reeves update, Polak-Ribière update, Powell-Beale restart), and Levenberg-Marquardt (LM) algorithms. Three synthetic signals are generated to conduct the experiments. The simulation results show that, in general, the NARX network, which is a dynamic system, outperforms the popular BPNN. In addition, the conjugate gradient algorithms provide better prediction accuracy than the widely used Levenberg-Marquardt algorithm in modeling the exponential signal. However, the LM algorithm performed best when used to forecast the Moroccan and South African stock price indices under both the BPNN and NARX systems.
Chapter Preview

Introduction

Artificial neural networks are adaptive nonlinear systems capable of approximating any function. Theoretically, a neural network can approximate a continuous function to arbitrary accuracy on any compact set (Funahashi, 1989; Hornik, 1991; Cybenko, 1989). The backpropagation (BP) algorithm introduced by Rumelhart et al. (1986) is the best-known method for training multilayer feed-forward artificial neural networks. It adopts the gradient descent algorithm: in the basic BP algorithm, the weights are adjusted in the steepest descent direction (the negative of the gradient). However, the backpropagation neural network (BPNN) has a slow convergence rate and may become trapped in local minima. In addition, the performance of the BPNN depends on the learning rate parameter and on the complexity of the problem to be modelled. Indeed, the choice of learning rate affects the convergence of the BPNN and is usually determined by experience.

Many faster algorithms have been proposed to speed up the convergence of the BPNN. They fall into two main categories. The first category uses heuristic techniques developed from an analysis of the performance of the standard steepest descent algorithm; it includes gradient descent with adaptive learning rate, gradient descent with momentum, gradient descent with momentum and adaptive learning rate, and the resilient algorithm. In standard steepest descent, the learning rate is fixed and its optimal value is hard to find. The heuristic techniques allow the learning rate to adapt during training as the algorithm moves across the performance surface, which can improve performance.

The second category uses standard numerical optimization techniques and includes the conjugate gradient, quasi-Newton, and Levenberg-Marquardt (LM) algorithms. In the conjugate gradient algorithms, a search is performed along conjugate directions, so convergence is faster than along steepest descent directions. The quasi-Newton method often converges faster than conjugate gradient methods because it does not require the calculation of second derivatives; instead, it updates an approximation to the Hessian matrix at each iteration. Finally, the LM method combines the best features of the Gauss-Newton technique and the steepest descent method. It also converges faster than conjugate gradient methods because the Hessian matrix is not computed but only approximated: it uses the Jacobian, which requires less computation than the Hessian matrix.
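To make the contrast concrete, the following is a minimal sketch in Python/NumPy (not the chapter's code) of the fixed-step steepest descent update used by basic BP and the Jacobian-based Levenberg-Marquardt update described above, applied to a one-parameter least-squares fit. The toy model y = exp(a·x), the learning rate, and the damping schedule are all assumptions made purely for illustration.

```python
# Minimal sketch: steepest descent vs. the Levenberg-Marquardt update
# on a toy least-squares problem y = exp(a * x), fitting the parameter a.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.exp(0.9 * x) + 0.01 * rng.standard_normal(x.size)  # synthetic signal

def residuals(a):
    return y - np.exp(a * x)            # e(a) = y - model(a)

def jacobian(a):
    return -x * np.exp(a * x)           # de/da (a column of the Jacobian)

# 1) Steepest descent: step along the negative gradient of 0.5 * ||e||^2.
a_sd, lr = 0.0, 0.01
for _ in range(200):
    e = residuals(a_sd)
    grad = jacobian(a_sd) @ e           # gradient of the loss w.r.t. a
    a_sd -= lr * grad

# 2) Levenberg-Marquardt: solve (J'J + mu*I) delta = -J'e, so the Hessian
#    is never computed, only approximated through the Jacobian.
a_lm, mu = 0.0, 1e-2
for _ in range(20):
    e = residuals(a_lm)
    J = jacobian(a_lm).reshape(-1, 1)
    delta = np.linalg.solve(J.T @ J + mu * np.eye(1), -(J.T @ e))
    a_new = a_lm + delta.item()
    if np.sum(residuals(a_new) ** 2) < np.sum(e ** 2):
        a_lm, mu = a_new, mu * 0.5      # accept step, trust the model more
    else:
        mu *= 2.0                       # reject step, lean toward gradient

print(f"steepest descent: a = {a_sd:.4f}, LM: a = {a_lm:.4f}")
```

On a well-conditioned toy problem such as this one, the damped Gauss-Newton step typically reaches the optimum in far fewer iterations than the fixed-step gradient loop, mirroring the convergence argument above; the learning rate and damping schedule here are illustrative choices, not the chapter's settings.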

Key Terms in this Chapter

Learning Algorithm: A learning algorithm is a method used to process data to extract patterns appropriate for application in a new situation. In particular, the goal is to adapt a system to a specific input-output transformation task.

Artificial Neural Network: An artificial neural network is an information processing system inspired by the human nervous system. It can be trained to produce a number of outputs in response to provided inputs.

Forecasting: In its simplest form, it is the task of providing an estimated future value of a given variable based on past and current information.

Backpropagation Algorithm: It is a training algorithm for artificial neural networks based on computing the gradient of a loss function with respect to the network weights in order to optimize them.

NARX Network: It is a recurrent dynamic network with feedback connections enclosing several layers of the network, which makes it suitable for time series modeling (see the sketch following these terms).

Stock Market: It is a particular market where stocks and bonds issued by companies are publicly traded.
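As a hedged illustration of the NARX idea, the sketch below builds the lagged regressor of a NARX model, y(t) = f(y(t-1), ..., y(t-dy), x(t-1), ..., x(t-dx)), and trains it in series-parallel (open-loop) form, where it reduces to a feedforward fit on lagged inputs. It uses scikit-learn's 'lbfgs' solver, a limited-memory variant of the BFGS quasi-Newton algorithm compared in this chapter; the data-generating process, lag orders (dy = dx = 2), and hidden-layer size are assumptions for illustration only, not the chapter's experimental setup.

```python
# Minimal NARX sketch (hypothetical data, not the chapter's experiments):
# one-step-ahead prediction from lagged outputs and exogenous inputs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n, dy, dx = 500, 2, 2
x = rng.standard_normal(n)                      # exogenous input series
y = np.zeros(n)
for t in range(1, n):                           # synthetic nonlinear process
    y[t] = 0.6 * np.tanh(y[t - 1]) + 0.3 * x[t - 1] \
           + 0.05 * rng.standard_normal()

# Lagged regressor matrix: row t holds [y(t-1..t-dy), x(t-1..t-dx)],
# and the target is y(t).
d = max(dy, dx)
rows = [np.concatenate([y[t - dy:t][::-1], x[t - dx:t][::-1]])
        for t in range(d, n)]
X, target = np.asarray(rows), y[d:n]

# Open-loop training with a quasi-Newton (L-BFGS) solver.
net = MLPRegressor(hidden_layer_sizes=(8,), solver='lbfgs',
                   max_iter=2000, random_state=0).fit(X[:-50], target[:-50])
print("one-step-ahead test MSE:",
      np.mean((net.predict(X[-50:]) - target[-50:]) ** 2))
```

In closed-loop (parallel) operation, the network's own predictions would be fed back in place of the lagged measured outputs, which is what makes the NARX network a dynamic system suited to multi-step forecasting.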
