Fundamentals of Higher Order Neural Networks for Modeling and Simulation

Madan M. Gupta (University of Saskatchewan, Canada), Ivo Bukovsky (Czech Technical University in Prague, Czech Republic), Noriyasu Homma (Tohoku University, Japan), Ashu M. G. Solo (Maverick Technologies America Inc., USA) and Zeng-Guang Hou (The Chinese Academy of Sciences, China)
DOI: 10.4018/978-1-4666-2175-6.ch006


In this chapter, the authors provide fundamental principles of Higher Order Neural Units (HONUs) and Higher Order Neural Networks (HONNs) for modeling and simulation. The essential core of HONNs lies in the higher order weighted combinations, or correlations, of the input variables within a HONU. Beyond the high-quality nonlinear approximation achieved by static HONUs, the capability of dynamic HONUs for modeling dynamic systems is shown and compared to conventional recurrent neural networks when a practical learning algorithm is used. In addition, the potential of continuous dynamic HONUs to approximate systems of high dynamic order is discussed, as adaptable time delays can be implemented. Through typical examples, this chapter describes how and why higher order combinations or correlations can be effective for modeling systems.
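To illustrate the idea of higher order weighted combinations of inputs, a second-order HONU (often called a quadratic neural unit, QNU) can be sketched as follows. This is a minimal illustrative sketch, not code from the chapter; the function name and the placeholder weights are assumptions.

```python
import numpy as np

def qnu(x, W):
    """Quadratic neural unit (QNU): the output is a weighted sum of all
    second-order combinations x_i * x_j of the augmented input vector.
    x : input vector of length n
    W : weight matrix of shape (n+1, n+1); using only its upper triangle
        avoids counting each pair (i, j) twice.
    """
    xa = np.concatenate(([1.0], x))   # augment with bias term x0 = 1
    return float(xa @ W @ xa)         # sum over i <= j of w_ij * x_i * x_j

# Example with 2 inputs -> 3x3 upper-triangular weight matrix
x = np.array([0.5, -1.0])
W = np.triu(np.ones((3, 3)))          # placeholder weights for illustration
y = qnu(x, W)                         # -> 1.25 for these weights
```

Because the output is linear in the weights (though nonlinear in the inputs), such units can still be trained with simple gradient or least-squares methods, which is one reason HONUs are attractive for modeling.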

1. Introduction

The human brain has more than 10 billion neurons, which have complicated interconnections, and these neurons constitute a large-scale signal processing and memory network. The mathematical study of a single neural model and its various extensions is the first step in the design of a complex neural network for solving a variety of problems in the fields of signal processing, pattern recognition, control of complex processes, neurovision systems, and other decision-making processes. Neural network solutions for these problems can be directly used in computer science and engineering applications.

A simple neural model is presented in Figure 1. In terms of information processing, an individual neuron with dendrites as multiple-input terminals and an axon as a single-output terminal may be considered a Multiple-Input/Single-Output (MISO) system. The processing functions of this MISO neural processor may be divided into the following four categories:

Figure 1.

A simple neural model as a multiple-input (dendrites) and single-output (axon) processor

  • 1. Dendrites: They consist of a highly branching tree of fibers and act as input points to the main body of the neuron. On average, there are 10^3 to 10^4 dendrites per neuron, which form receptive surfaces for input signals to the neuron.

  • 2. Synapse: It is a storage area of past experience (knowledge base). It provides Long-Term Memory (LTM) for past accumulated experience. It receives information from sensors and other neurons and provides outputs through the axons.

  • 3. Soma: The neural cell body is called the soma. It is the large, round central neuronal body. It receives synaptic information and performs further processing of that information. Almost all logical functions of the neuron are carried out in the soma.

  • 4. Axon: The neural output line is called the axon. The output appears in the form of an action potential that is transmitted to other neurons for further processing.

The electrochemical activities at the synaptic junctions of neurons exhibit a complex behavior because each neuron makes hundreds of interconnections with other neurons. Each neuron acts as a parallel processor because it receives action potentials in parallel from the neighboring neurons and then transmits pulses in parallel to other neighboring synapses. In terms of information processing, the synapse also performs a crude pulse frequency-to-voltage conversion as shown in Figure 1.
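The MISO neural processor described above (dendritic inputs weighted by synaptic strengths, aggregated in the soma, and emitted through the axon) is commonly abstracted as a weighted sum followed by a nonlinear activation. The sketch below is an illustrative abstraction under that assumption, not a model taken from the chapter; the sigmoid stands in for the axon's firing behavior.

```python
import math

def miso_neuron(inputs, weights, bias=0.0):
    """Abstract MISO neuron: dendritic inputs are weighted by synaptic
    strengths (the stored 'knowledge base'), summed in the soma, and
    passed through a sigmoidal activation representing axonal output."""
    u = sum(w * x for w, x in zip(weights, inputs)) + bias  # somatic sum
    return 1.0 / (1.0 + math.exp(-u))                       # sigmoid activation

# Hypothetical inputs and synaptic weights for illustration
y = miso_neuron([1.0, 0.5, -0.25], [0.4, -0.2, 0.8], bias=0.1)
```

A first-order unit of this kind is the building block that HONUs generalize: replacing the linear sum with higher order combinations of the inputs yields the quadratic and higher order units discussed in this chapter.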
