Tohru Nitta (AIST, Japan)

DOI: 10.4018/978-1-59904-849-9.ch055

Chapter Preview

The usual real-valued artificial neural networks have been applied to various fields such as telecommunications, robotics, bioinformatics, image processing and speech recognition, in which complex numbers (two dimensions) are often used together with the Fourier transform. This indicates the usefulness of complex-valued neural networks, whose input and output signals and parameters such as weights and thresholds are all complex numbers; they are an extension of the usual real-valued neural networks. In addition, in the human brain, an action potential may have different pulse patterns, and the distance between pulses may differ. This suggests that it is appropriate to introduce complex numbers, which can represent phase and amplitude, into neural networks.

Aizenberg, Ivaskiv, Pospelov and Hudiakov (1971) (former Soviet Union) proposed a complex-valued neuron model for the first time, and although it was only available in Russian literature, their work can now be read in English (Aizenberg, Aizenberg & Vandewalle, 2000). Prior to that time, most researchers other than Russians had assumed that the first persons to propose a complex-valued neuron were Widrow, McCool and Ball (1975). Interest in the field of neural networks started to grow around 1990, and various types of complex-valued neural network models were subsequently proposed. Since then, their characteristics have been researched, making it possible to solve some problems which could not be solved with the real-valued neuron, and to solve many complicated problems more simply and efficiently.

The generic definition of a complex-valued neuron is as follows. The input signals, weights, thresholds and output signals are all complex numbers. The net input *U_n* to a complex-valued neuron *n* is given by

*U_n* = Σ_m *W_nm* *X_m* + *V_n*,

where *W_nm* is the (complex-valued) weight connecting neurons *n* and *m*, *X_m* is the (complex-valued) input signal from neuron *m*, and *V_n* is the (complex-valued) threshold of neuron *n*. The output of the neuron is obtained by applying a complex-valued activation function to *U_n*.
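As a minimal sketch of the net input defined above, the weighted sum plus threshold can be computed directly with complex arithmetic (the weight, input, and threshold values here are illustrative, not from the chapter):

```python
import numpy as np

def net_input(weights, inputs, threshold):
    """Net input U_n of a complex-valued neuron:
    the complex weighted sum of the inputs plus the complex threshold."""
    return np.dot(weights, inputs) + threshold

# Illustrative complex-valued parameters for a two-input neuron.
w = np.array([1.0 + 1.0j, 0.5 - 0.5j])  # weights W_nm
x = np.array([1.0 - 1.0j, 2.0 + 0.0j])  # input signals X_m
v = 0.1 + 0.2j                          # threshold V_n

u = net_input(w, x, v)  # (1+1j)(1-1j) + (0.5-0.5j)(2) + (0.1+0.2j)
```

Note that the complex multiplications mix the real and imaginary channels, which is what distinguishes this model from a pair of independent real-valued neurons.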

For example, the component-wise activation function, or real-imaginary type activation function, is often used (Nitta & Furuya, 1991; Benvenuto & Piazza, 1992; Nitta, 1997). It is defined as follows:

*f*(*z*) = *f*(*x*) + *i f*(*y*), where *z* = *x* + *iy* and *f*(*u*) = 1/(1 + exp(−*u*)).

That is, the real and imaginary parts of a neuron's output are the sigmoid functions of the real part *x* and the imaginary part *y*, respectively, of the net input *z* to the neuron.

Regular Complex Function: A complex function that is complex-differentiable at every point of its domain; also called a holomorphic function.

Artificial Neural Network: A network composed of artificial neurons. Artificial neural networks can be trained to find nonlinear relationships in data.

Back-Propagation Algorithm: A supervised learning technique used for training neural networks, based on minimizing the error between the actual outputs and the desired outputs.

Identity Theorem: A theorem for regular complex functions: given two regular functions f and g on a connected open set D, if f = g on some neighborhood of a point in D, then f = g on all of D.

Complex Number: A number of the form a + ib, where a and b are real numbers and i is the imaginary unit satisfying i² = −1. a is called the real part, and b the imaginary part.

Decision Boundary: A boundary which pattern classifiers such as the real-valued neural network use to classify input patterns into several classes. It generally consists of hypersurfaces.

Clifford Algebras: Associative algebras that can be thought of as generalizations of the complex numbers and quaternions.

Quaternion: A four-dimensional number which is a non-commutative extension of complex numbers.

