Dynamic Ridge Polynomial Higher Order Neural Network


Rozaida Ghazali (Universiti Tun Hussein Onn, Malaysia), Abir Hussain (Liverpool John Moores University, UK) and Nazri Mohd Nawi (Universiti Tun Hussein Onn, Malaysia)
DOI: 10.4018/978-1-61520-711-4.ch011


This chapter proposes a novel Dynamic Ridge Polynomial Higher Order Neural Network (DRPHONN). The architecture of the new DRPHONN incorporates recurrent links into the structure of the ordinary Ridge Polynomial Higher Order Neural Network (RPHONN) (Shin & Ghosh, 1995). RPHONN is a type of feedforward Higher Order Neural Network (HONN) (Giles & Maxwell, 1987) which implements a static mapping of the input vectors. In order to model dynamical functions of the brain, it is essential to utilize a system that is capable of storing internal states and implementing complex dynamic systems. Neural networks with recurrent connections are dynamical systems with temporal state representations. The dynamic structure approach has been successfully used for solving a variety of problems, such as time series forecasting (Zhang & Chan, 2000; Steil, 2006), approximating a dynamical system (Kimura & Nakano, 2000), forecasting stream flow (Chang et al., 2004), and system control (Reyes et al., 2000). Motivated by the ability of recurrent dynamic systems in real-world applications, the proposed DRPHONN architecture is presented in this chapter.
Chapter Preview

The Properties and Network Structure of DRPHONN

In linear systems, the use of past input values creates the Moving Average (MA) models, while the use of past output values creates what is known as the Autoregressive (AR) models. Feedforward neural networks have been shown to be a special case of Nonlinear Autoregressive (NAR) models, whereas Recurrent Neural Networks (RNNs) have been shown to be a special case of Nonlinear ARMA (NARMA) models. This means that RNNs have moving average components and therefore hold advantages over feedforward neural networks, similar to the advantages that the ARMA model possesses over the AR model (Connor et al., 1994). Hence, RNNs are well suited for time series that possess moving average components (Connor et al., 1994).
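The contrast between the NAR and NARMA formulations can be sketched as follows; the notation here ($y_t$ for the series, $e_t$ for the noise term, lag orders $p$ and $q$) is assumed for illustration and does not appear in the chapter itself:

```latex
% NAR model: the next value depends only on past outputs
y_t = f\bigl(y_{t-1}, \dots, y_{t-p}\bigr) + e_t

% NARMA model: past noise (moving average) terms enter as well --
% the component that an RNN's internal feedback can capture
y_t = f\bigl(y_{t-1}, \dots, y_{t-p},\, e_{t-1}, \dots, e_{t-q}\bigr) + e_t
```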

Most real-world applications require explicit treatment of dynamics. The feedforward RPHONN can only accommodate dynamic systems by including past inputs and target values in an augmented set of inputs. However, this kind of dynamic representation does not exploit a known feature of biological networks, namely internal feedback. DRPHONN, on the other hand, incorporates a recurrent connection, and as a consequence of this feedback, the network outputs depend not only on the current external inputs, but also on the entire history of the system's inputs. Hence, the introduction of recurrent feedback into the ordinary feedforward RPHONN is expected to improve the input-output mapping. This relates to the fact that the proposed DRPHONN has a memory with which to solve the underlying task, and exhibits rich dynamic behaviour.

The structure of the DRPHONN is constructed from a number of Pi-Sigma units of increasing order (refer to Figure 1) (Shin & Ghosh, 1991), with the addition of a feedback connection from the output layer to the input layer. The feedback connection feeds the activation of the output node back to the summing nodes in each Pi-Sigma unit, thus allowing each Pi-Sigma building block to see the resulting output of the previous patterns. In contrast to RPHONN, the proposed DRPHONN, as shown in Figure 2, is provided with memories which give the network the ability to retain information for later use. All the connection weights from the input layer to the first summing layer are learnable, while the rest are fixed to unity.
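The forward pass described above can be sketched as follows. This is a minimal illustration under assumed details (a sigmoid output activation, a bias node, small random initial weights, zero initial feedback), not the authors' implementation:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

class DRPHONN:
    """Sketch of a DRPHONN forward pass (assumed details, for illustration).

    Pi-Sigma units of orders 1..K all receive the same augmented input
    [x(t), y(t-1), 1]: the external inputs, the fed-back previous output,
    and a bias. Only the weights into the summing layer are trainable;
    the product-layer and output weights are fixed to unity.
    """
    def __init__(self, n_inputs, order, rng=None):
        rng = np.random.default_rng(rng)
        # Unit k has k summing nodes, each fed by n_inputs + feedback + bias.
        self.W = [rng.normal(scale=0.1, size=(k, n_inputs + 2))
                  for k in range(1, order + 1)]
        self.prev_output = 0.0  # y(t-1), initialised to zero

    def forward(self, x):
        # Augment the input with the fed-back output and a bias term.
        z = np.concatenate([x, [self.prev_output, 1.0]])
        # Each Pi-Sigma unit multiplies its summing-node activations
        # (product-layer weights fixed to unity).
        psu_outputs = [np.prod(W @ z) for W in self.W]
        # Output node sums the units' outputs (unit weights) and squashes.
        y = sigmoid(np.sum(psu_outputs))
        self.prev_output = y  # store for the next pattern
        return y

net = DRPHONN(n_inputs=3, order=2, rng=0)
y1 = net.forward(np.array([0.1, 0.2, 0.3]))
y2 = net.forward(np.array([0.2, 0.3, 0.4]))  # depends on y1 via feedback
```

Because `prev_output` is carried between calls, each new pattern is processed in the context of the network's previous response, which is the memory property the paragraph above describes.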
