Higher Order Neural Network Architecture
HONNs were first introduced by Giles and Maxwell (1987) and further analyzed by Pao (1989), who referred to them as ‘tensor networks’ and regarded them as a special case of his functional-link models. HONNs have already enjoyed some success in pattern recognition (Giles and Maxwell, 1987; Schmidt and Davis, 1993) and associative recall (Karayiannis, 1995), but their application to financial time series prediction has only recently begun, with contributions such as Dunis et al. (2006a, b, c). The typical structure of a HONN is given in Figure 1b.
Figure 1: (a) Left, an MLP with three inputs and two hidden nodes. (b) Right, a second-order HONN with three inputs.
HONNs use joint activations between inputs, thus removing the task of establishing relationships between them during training. For this reason, a hidden layer is commonly not used. The reduced number of free weights compared with MLPs means that the problems of overfitting and local optima can be mitigated to a large degree. In addition, as noted by Pao (1989), a HONN is faster to train and execute than an MLP. It is, however, necessary for practical reasons to limit both the order of the network and the number of inputs, to avoid the curse of dimensionality: the number of joint-activation terms grows combinatorially in both.
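The forward pass of such a network can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, and we assume a single sigmoid output unit over the original inputs augmented with their second-order joint activations (pairwise products), as in the second-order HONN of Figure 1b.

```python
import numpy as np
from itertools import combinations_with_replacement


def honn_features(x):
    """Augment inputs with second-order joint activations.

    For inputs x_1..x_n, appends all products x_i * x_j (i <= j),
    so the network need not learn these interactions itself.
    """
    x = np.asarray(x, dtype=float)
    pairs = [x[i] * x[j]
             for i, j in combinations_with_replacement(range(len(x)), 2)]
    return np.concatenate([x, np.array(pairs)])


def honn_forward(x, w, b):
    """Single-output second-order HONN: sigmoid over the
    linear and joint-activation terms (no hidden layer)."""
    z = honn_features(x) @ w + b
    return 1.0 / (1.0 + np.exp(-z))


# With three inputs there are 3 linear + 6 joint terms = 9 free
# weights plus a bias, illustrating why the order and the number
# of inputs must be kept small in practice.
phi = honn_features([1.0, 2.0, 3.0])
print(len(phi))  # 9
```

Note that the expansion has C(n+1, 2) second-order terms for n inputs, which is the combinatorial growth referred to above.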