Higher Order Neural Networks with Bayesian Confidence Measure for the Prediction of the EUR/USD Exchange Rate

Adam Knowles, Abir Hussain, Wael El Deredy, Paulo G.J. Lisboa, Christian L. Dunis
Copyright: © 2009 | Pages: 12
DOI: 10.4018/978-1-59904-897-0.ch002

Abstract

Multi-Layer Perceptrons (MLPs) are the most common type of neural network in use, and their ability to perform complex nonlinear mappings and their tolerance to noise in data are well documented. However, MLPs also suffer from long training times and often converge only to local optima. Another type of network is the Higher Order Neural Network (HONN). These can be considered a ‘stripped-down’ version of the MLP, in which joint activation terms are used, relieving the network of the task of learning the relationships between the inputs. The predictive performance of the network is tested on the EUR/USD exchange rate and evaluated using standard financial criteria, including the annualized return on investment, showing an 8% increase in the return compared with the MLP. The outputs of the networks that gave the highest annualized return in each category were subjected to a Bayesian-based confidence measure. This performance improvement may be explained by the explicit and parsimonious representation of high-order terms in HONNs, which combines the robustness against noise typical of distributed models with the ability to accurately model higher-order interactions for long-term forecasting. The effectiveness of the confidence measure is explained by examining the distribution of each network’s output. We speculate that this distribution can be taken into account during training, thus enabling us to produce neural networks with the properties needed to take advantage of the confidence measure.
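The abstract does not define the annualized return criterion, so the following is a minimal sketch of one common way to compute it, assuming daily returns and a simple long/short directional trading rule. The function name annualized_return and the synthetic data are purely illustrative and are not the authors' evaluation code.

```python
import numpy as np

def annualized_return(actual_returns, predicted_returns, trading_days=252):
    """Annualized return of a simple directional strategy: go long when the
    forecast return is positive, short when it is negative."""
    positions = np.sign(predicted_returns)          # +1 long, -1 short, 0 flat
    strategy_returns = positions * actual_returns   # daily profit and loss
    return trading_days * strategy_returns.mean()   # annualized mean daily return

# Example with synthetic data: a noisy forecast of daily log-returns.
rng = np.random.default_rng(0)
actual = rng.normal(0.0, 0.005, size=500)
forecast = actual + rng.normal(0.0, 0.005, size=500)
print(f"Annualized return: {annualized_return(actual, forecast):.2%}")
```

Under this convention, a model whose predictions more often match the sign of the realized return earns a higher annualized figure, which is how the 8% improvement over the MLP should be read.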
Chapter Preview

Methods And Models

Higher Order Neural Network Architecture

HONNs were first introduced by Giles and Maxwell (1987) and further analyzed by Pao (1989), who referred to them as ‘tensor networks’ and regarded them as a special case of his functional-link models. HONNs have already enjoyed some success in pattern recognition, as with Giles and Maxwell (1987) and Schmidt and Davis (1993), and in associative recall, as with Karayiannis (1995), but their application to financial time series prediction has only recently begun, with contributions such as Dunis et al. (2006a, b, c). The typical structure of a HONN is given in Figure 1b.

Figure 1.

(a) Left, MLP with three inputs and two hidden nodes. (b) Right, Second Order HONN with three inputs

HONNs use joint activations between inputs, thus removing the task of establishing relationships between them during training. For this reason, a hidden layer is commonly not used. The reduced number of free weights compared with MLPs means that the problems of overfitting and local optima can be mitigated to a large degree. In addition, as noted by Pao (1989), a HONN is faster to train and execute than an MLP. It is, however, necessary for practical reasons to limit both the order of the network and the number of inputs, to avoid the curse of dimensionality.
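To make the joint-activation idea concrete, here is a minimal sketch of a second-order HONN with three inputs, matching Figure 1b. The helper names (honn_features, honn_forward) and the tanh output activation are assumptions for illustration; the chapter does not specify the output nonlinearity.

```python
import numpy as np
from itertools import combinations_with_replacement

def honn_features(x, order=2):
    """Expand an input vector into all joint-activation (product) terms up to
    the given order. For x = [x1, x2, x3] and order 2 this yields
    x1, x2, x3, x1*x1, x1*x2, x1*x3, x2*x2, x2*x3, x3*x3."""
    feats = []
    for k in range(1, order + 1):
        for idx in combinations_with_replacement(range(len(x)), k):
            feats.append(np.prod(x[list(idx)]))
    return np.array(feats)

def honn_forward(x, weights, bias):
    """Single-output second-order HONN: a weighted sum of the expanded
    features plus a bias -- no hidden layer, because the input interactions
    are supplied explicitly rather than learned."""
    return np.tanh(honn_features(x) @ weights + bias)

# Second-order HONN with three inputs (cf. Figure 1b): 3 + 6 = 9 weights.
x = np.array([0.2, -0.1, 0.4])
rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.1, size=9)
print(honn_forward(x, w, bias=0.0))
```

Note how the feature count grows combinatorially with both the order and the number of inputs, which is precisely why the order and input count must be limited in practice.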
