Complex-Valued Neural Networks

Tohru Nitta (AIST, Japan)
Copyright © 2009 | Pages: 6
DOI: 10.4018/978-1-59904-849-9.ch055

Abstract

The usual real-valued artificial neural networks have been applied to various fields such as telecommunications, robotics, bioinformatics, image processing and speech recognition, in which complex numbers (two dimensions) are often used, for example with the Fourier transform. This suggests the usefulness of complex-valued neural networks, an extension of the usual real-valued neural networks in which the input and output signals and the parameters such as weights and thresholds are all complex numbers. In addition, in the human brain an action potential may have different pulse patterns, and the intervals between pulses may differ. This suggests that it is appropriate to introduce complex numbers, which can represent phase and amplitude, into neural networks. Aizenberg, Ivaskiv, Pospelov and Hudiakov (1971), in the former Soviet Union, proposed a complex-valued neuron model for the first time; although their work was available only in Russian, it can now be read in English (Aizenberg, Aizenberg & Vandewalle, 2000). Before that, most researchers outside Russia assumed that the first to propose a complex-valued neuron were Widrow, McCool and Ball (1975). Interest in the field of neural networks began to grow around 1990, and various types of complex-valued neural network models were subsequently proposed. Since then, their characteristics have been studied, making it possible to solve some problems that could not be solved with real-valued neurons, and to solve many complicated problems more simply and efficiently.

Introduction

The usual real-valued artificial neural networks have been applied to various fields such as telecommunications, robotics, bioinformatics, image processing and speech recognition, in which complex numbers (two dimensions) are often used, for example with the Fourier transform. This suggests the usefulness of complex-valued neural networks, an extension of the usual real-valued neural networks in which the input and output signals and the parameters such as weights and thresholds are all complex numbers. In addition, in the human brain an action potential may have different pulse patterns, and the intervals between pulses may differ. This suggests that it is appropriate to introduce complex numbers, which can represent phase and amplitude, into neural networks.

Aizenberg, Ivaskiv, Pospelov and Hudiakov (1971), in the former Soviet Union, proposed a complex-valued neuron model for the first time; although their work was available only in Russian, it can now be read in English (Aizenberg, Aizenberg & Vandewalle, 2000). Before that, most researchers outside Russia assumed that the first to propose a complex-valued neuron were Widrow, McCool and Ball (1975). Interest in the field of neural networks began to grow around 1990, and various types of complex-valued neural network models were subsequently proposed. Since then, their characteristics have been studied, making it possible to solve some problems that could not be solved with real-valued neurons, and to solve many complicated problems more simply and efficiently.

Background

The generic definition of a complex-valued neuron is as follows. The input signals, weights, thresholds and output signals are all complex numbers. The net input U_n to a complex-valued neuron n is defined as:

U_n = Σ_m W_nm X_m + V_n,   (1)

where W_nm is the complex-valued weight connecting complex-valued neurons n and m, X_m is the complex-valued input signal from the complex-valued neuron m, and V_n is the complex-valued threshold of the neuron n. The output value of the neuron n is given by f_C(U_n), where f_C: C → C is called the activation function (C denotes the set of complex numbers). Various types of activation functions have been proposed for the complex-valued neuron, and they strongly influence its properties; a complex-valued neural network consists of such complex-valued neurons.
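As an illustration, the net input of Eq. (1) can be computed directly with Python's built-in complex type. The weights, inputs, and threshold below are made-up values for a single neuron with three input connections, not values from the chapter:

```python
# Hypothetical example: one complex-valued neuron n with three inputs.
W = [0.5 + 0.2j, -0.3 + 0.1j, 0.8 - 0.4j]   # complex weights W_nm
X = [1.0 + 1.0j, 0.5 - 0.5j, -1.0 + 0.0j]   # complex inputs X_m
V = 0.1 - 0.1j                              # complex threshold V_n

# Net input as in Eq. (1): U_n = sum_m W_nm X_m + V_n.
# Complex multiplication handles the real/imaginary cross terms automatically.
U = sum(w * x for w, x in zip(W, X)) + V
print(U)
```

Note that a single complex multiplication W_nm X_m mixes the real and imaginary parts of weight and input, which is what distinguishes this model from two independent real-valued networks.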

For example, the component-wise (real-imaginary type) activation function is often used (Nitta & Furuya, 1991; Benvenuto & Piazza, 1992; Nitta, 1997). It is defined as:

f_C(z) = f_R(x) + i f_R(y),   (2)

where f_R(u) = 1/(1 + exp(−u)), u ∈ R (R denotes the set of real numbers), i denotes √(−1), and the net input U_n is decomposed into its real and imaginary parts as:

U_n = z = x + iy.   (3)

That is, the real and imaginary parts of the output of a neuron are the sigmoid functions of the real part x and the imaginary part y, respectively, of the net input z to the neuron.
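A minimal sketch of the real-imaginary type activation of Eqs. (2)–(3), applying the real-valued sigmoid separately to each component of the net input:

```python
import math

def f_R(u):
    """Real-valued logistic sigmoid of Eq. (2): f_R(u) = 1 / (1 + exp(-u))."""
    return 1.0 / (1.0 + math.exp(-u))

def f_C(z):
    """Component-wise (real-imaginary type) activation:
    apply f_R separately to the real part x and imaginary part y
    of the net input z = x + iy, as in Eqs. (2)-(3)."""
    return complex(f_R(z.real), f_R(z.imag))

# A zero net input maps to 0.5 + 0.5j, since f_R(0) = 0.5 in each component.
print(f_C(0 + 0j))  # (0.5+0.5j)
```

This component-wise form is bounded and easy to differentiate with respect to the real and imaginary parts, which is why it is convenient for gradient-based learning, but it is not complex-differentiable (regular) as a function of z.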

Key Terms in this Chapter

Regular Complex Function: A complex function that is complex-differentiable at every point of its domain (also called holomorphic or analytic).

Artificial Neural Network: A network composed of artificial neurons. Artificial neural networks can be trained to find nonlinear relationships in data.

Back-Propagation Algorithm: A supervised learning technique used for training neural networks, based on minimizing the error between the actual outputs and the desired outputs.

Identity Theorem: A theorem for regular complex functions: given two regular functions f and g on a connected open set D, if f = g on some neighborhood of a point z in D, then f = g on all of D.

Complex Number: A number of the form a + ib, where a and b are real numbers and i is the imaginary unit satisfying i² = −1. a is called the real part, and b the imaginary part.

Decision Boundary: A boundary which pattern classifiers such as the real-valued neural network use to classify input patterns into several classes. It generally consists of hypersurfaces.

Clifford Algebra: An associative algebra that can be thought of as one possible generalization of complex numbers and quaternions.

Quaternion: A four-dimensional number which is a non-commutative extension of complex numbers.
