Mohammed Sadiq Al-Rawi (University of Aveiro, Portugal) and Kamal R. Al-Rawi (Petra University, Jordan)

Source Title: Artificial Higher Order Neural Networks for Computer Science and Engineering: Trends for Emerging Applications

Copyright: © 2010
Pages: 21
DOI: 10.4018/978-1-61520-711-4.ch006

Chapter Preview

In our daily life we face several classification problems that are nonlinear, i.e., the two categories cannot be separated by a simple line for two-dimensional patterns, a plane for three-dimensional patterns, or a hyperplane for multi-dimensional patterns. Inspired by the biological neuronal system, computationally intelligent classification systems were developed over the past few decades and are widely known as computational neural networks. These computational networks possess powerful nonlinear classification ability, and they appear in the literature under other proximate names, such as artificial neural networks and statistical neural networks. In this work, we call multi-layer feedforward neural networks Ordinary Neural Networks (ONNs) in order to distinguish them from Higher Order Neural Networks (HONNs), since both HONNs and ONNs are artificial, computational, multi-layer feedforward neural networks. Other terms for ONNs also exist, such as first-order neural networks (Giles et al., 1988) and multi-layer perceptrons (Minsky & Papert, 1969).
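The classic instance of such a nonlinear problem is XOR. The sketch below (illustrative only; the function names and the weight grid are our own, not from the chapter) brute-forces a small grid of line parameters to show that no line separates the XOR classes, while a single second-order product term does:

```python
# Hypothetical illustration: XOR is the standard example of a pattern set
# that no line w1*x1 + w2*x2 + b = 0 can separate, yet a single
# second-order (product) term x1*x2 separates it immediately.
import itertools

# XOR patterns with inputs in {-1, +1}; class is +1 when the inputs differ
patterns = [(-1, -1, -1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]

def linearly_separable(data):
    """Brute-force search over a small weight grid for a separating line."""
    grid = [w / 4 for w in range(-8, 9)]
    for w1, w2, b in itertools.product(grid, grid, grid):
        if all((w1 * x1 + w2 * x2 + b > 0) == (t > 0) for x1, x2, t in data):
            return True
    return False

print(linearly_separable(patterns))  # no line on the grid separates XOR
# A single product (PI) unit does: sign(-x1 * x2) matches every target
print(all((-x1 * x2 > 0) == (t > 0) for x1, x2, t in patterns))
```

The grid search is of course not a proof, but the non-separability of XOR is a standard result (Minsky & Papert, 1969), and the product term shows why higher-order units are attractive for such problems.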

In order to solve nonlinear classification problems, an ONN with one or more hidden layers can be employed. Determining the proper number of hidden layers and the number of units in each hidden layer is accomplished by trial and error, or by dynamic adaptive algorithms, e.g., the optimal brain damage and optimal brain surgeon pruning algorithms (Duda et al., 2000). Several studies have used HONNs rather than ONNs in order to obtain better performance (Thimm, 1998; Thimm & Fiesler, 1997; Spirkovska & Reid, 1993; Rovithakis et al., 2004). To what degree can we rely on these outperformance results? Investigating the literature shows that ONNs have only Sigma (summation) activation units; for example, the output of an ONN is given by:
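The preview omits the equation itself; a standard form for the output of a one-hidden-layer feedforward network (our notation, assuming $g$ and $f$ are the hidden and output activation functions) is:

$$
y_k = f\!\left(\sum_{j} w_{kj}\, g\!\left(\sum_{i} w_{ji}\, x_i + b_j\right) + b_k\right),
$$

where every unit computes a weighted sum of its inputs, i.e., a Sigma unit.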

Thus, the major difference between HONNs and ONNs is the way activation is calculated: only Sigma units are used to construct ONNs, while Sigma and PI units, or PI units alone, are used to construct HONNs. Does this matter? In computer architecture, a multiplication operation can be implemented by an algorithm performing several addition operations (Knuth, 1997; Kulisch, 2002). Indeed, multiplication of whole numbers is defined in terms of repeated addition, and even multiplication of real numbers can be defined by systematically generalizing this basic idea. With this in mind, a HONN could be converted to a very complex, large, constrained ONN. The hypothetical large ONN that is equivalent to a HONN may explain the power of a moderate-size HONN. Nonetheless, it is unfair to compare the computational cost of a HONN to that of an ONN with the same number of units and synaptic connections, and it is likewise unfair to compare their expressive power under that constraint, because the computational architecture and computational complexity of a HONN are much higher than those of an ONN. To overcome this dilemma, it is necessary to develop a mathematical model for converting a HONN to its equivalent ONN; further studies can then address the expressive power and the computational complexity of both architectures.
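The Sigma/PI distinction above can be made concrete with a minimal sketch (function names and the tanh activation are our illustrative choices, not the chapter's): a Sigma unit forms a weighted sum of its inputs, whereas a PI unit forms a product.

```python
# Minimal sketch of the two unit types discussed above.
import math

def sigma_unit(x, w, b=0.0):
    """Ordinary (first-order) unit: activation is a weighted SUM of inputs."""
    return math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)

def pi_unit(x, w):
    """Higher-order PI unit: activation is a PRODUCT of weighted inputs."""
    p = 1.0
    for wi, xi in zip(w, x):
        p *= wi * xi
    return p

x = [0.5, -1.0, 2.0]
print(sigma_unit(x, [1.0, 1.0, 1.0]))  # tanh(0.5 - 1.0 + 2.0) = tanh(1.5)
print(pi_unit(x, [1.0, 1.0, 1.0]))     # 0.5 * -1.0 * 2.0 = -1.0
```

Note that the PI unit's output is a monomial in the inputs, which is why a HONN can realize nonlinear decision boundaries directly, while an ONN must compose many Sigma units to approximate the same product.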
