Neural Networks and Equilibria, Synchronization, and Time Lags

Daniela Danciu, Vladimir Rasvan
Copyright © 2009 | Pages: 7
DOI: 10.4018/978-1-59904-849-9.ch178

Chapter Preview

Introduction

All neural networks, both natural and artificial, are characterized by two kinds of dynamics. The first is what we would call the “learning dynamics”: the sequential (discrete-time) dynamics of the choice of synaptic weights. The second is the intrinsic dynamics of the neural network viewed as a dynamical system, after the weights have been established via learning. Regarding the second kind, the emergent computational capabilities of a recurrent neural network can be achieved only if the network has many equilibria, and the network performs its task by converging to one of them. But this dynamics is induced a posteriori by the learning process that established the synaptic weights, and nothing guarantees that it has the required properties; they must therefore be checked separately.

The standard stability properties (Lyapunov, asymptotic and exponential stability) are defined for a single equilibrium. Their counterparts for the case of several equilibria are mutability, global asymptotics and gradient behavior. For the definitions of these general concepts the reader is referred to Gelig et al. (1978) and Leonov et al. (1992).

Over the last decades the number of applications of recurrent neural networks has grown; they have been designed for classification, identification, and complex image, visual and spatio-temporal processing in fields such as engineering, chemistry, biology and medicine (see, for instance, Fortuna et al., 2001; Fink, 2004; Atencia et al., 2004; Iwahori et al., 2005; Maurer et al., 2005; Guirguis & Ghoneimy, 2007). All these applications rely on the existence of several equilibria for such networks, and thus require the “good behavior” properties discussed above.

Another aspect of the qualitative analysis is the so-called synchronization problem, which arises when an external stimulus, in most cases periodic or almost periodic, has to be tracked (Gelig, 1982; Danciu, 2002). From the mathematical point of view, this problem is nothing other than the existence, uniqueness and global stability of forced oscillations.

During the last decades the models of neural network dynamics have been modified once more by the introduction of transmission delays. The standard model of a Hopfield-type network with delay, as considered in (Gopalsamy & He, 1994), is

ẋ_i(t) = −a_i x_i(t) + Σ_{j=1}^{n} w_{ij} f_j(x_j(t − τ_{ij})) + I_i,   a_i > 0,   i = 1, …, n
(1)
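A minimal numerical sketch of model (1) may help fix ideas: the forward-Euler scheme below integrates a small delayed Hopfield-type network. The network size, decay rates, weights, delays and inputs are illustrative assumptions, not values taken from the chapter.

```python
import numpy as np

# Forward-Euler integration of the delayed Hopfield model (1):
#   x_i'(t) = -a_i x_i(t) + sum_j w_ij f_j(x_j(t - tau_ij)) + I_i
# All parameter values below are illustrative assumptions.

n = 3                                # number of neurons
a = np.array([1.0, 1.2, 0.8])        # self-decay rates a_i > 0
W = np.array([[0.0, 0.5, -0.3],
              [0.4, 0.0,  0.6],
              [-0.2, 0.3, 0.0]])     # synaptic weights w_ij
tau = np.full((n, n), 0.5)           # transmission delays tau_ij
I = np.array([0.1, -0.2, 0.05])      # constant external inputs I_i
f = np.tanh                          # sigmoidal activations f_j

h, T = 0.01, 20.0                    # step size and time horizon
steps = int(T / h)
d = int(np.max(tau) / h)             # history length, in steps

# Constant initial history on [-max(tau), 0], then Euler stepping.
x = np.zeros((steps + d + 1, n))
x[: d + 1] = np.array([0.5, -0.5, 0.2])
for k in range(d, steps + d):
    delayed = np.array([sum(W[i, j] * f(x[k - int(tau[i, j] / h), j])
                            for j in range(n)) for i in range(n)])
    x[k + 1] = x[k] + h * (-a * x[k] + delayed + I)

print("state near t =", T, ":", x[-1])   # should settle near an equilibrium
```

For constant delays the scheme only needs a history buffer of max(τ)/h past states; the initial condition of a delay system is a whole function segment, not a single point.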

The present chapter aims at a general presentation, with both research and educational purposes, of the three topics mentioned above.


Background

Dynamical systems with several equilibria occur in such fields of science and technology as electrical machines, chemical reactions, economics, biology and, last but not least, neural networks.

For systems with several equilibria the usual local concepts of stability are not sufficient for an adequate description. The so-called “global phase portrait” may contain both stable and unstable equilibria: each of them may be characterized separately, since stability is a local concept dealing with a specific trajectory. But global concepts are also required for a better system description, and this is particularly true for neural networks. Indeed, neural networks may be viewed as interconnections of simple computing elements whose computational capability is increased by interconnection (“emergent collective computational abilities”, to cite Hopfield). This is due to the nonlinear characteristics leading to the existence of several stable equilibria. The network achieves its computing goal if no self-sustained oscillations are present and every trajectory settles at some steady state (equilibrium) among a finite (though possibly large) number of such states.
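As an illustration of convergence among several equilibria, the sketch below runs a classical discrete Hopfield network with Hebbian weights; the two stored patterns and the network size are arbitrary assumptions. With symmetric weights and zero diagonal, asynchronous updates make the energy nonincreasing, so every trajectory settles in some equilibrium (a stored pattern, its negative, or a spurious state).

```python
import numpy as np

# Discrete Hopfield network with several stable equilibria: two patterns
# are stored via the Hebb rule; the energy acts as a Lyapunov function for
# asynchronous updates. Patterns and size are illustrative assumptions.

rng = np.random.default_rng(0)
p1 = np.array([1, 1, 1, -1, -1, -1, 1, -1])
p2 = np.array([-1, 1, -1, 1, -1, 1, -1, 1])
n = p1.size
W = (np.outer(p1, p1) + np.outer(p2, p2)) / n
np.fill_diagonal(W, 0.0)             # symmetric weights, no self-connections

def energy(s):
    return -0.5 * s @ W @ s          # Hopfield energy, nonincreasing here

s = rng.choice([-1, 1], size=n)      # random initial state
for _ in range(50):                  # asynchronous sign updates
    i = rng.integers(n)
    s[i] = 1 if W[i] @ s >= 0 else -1

print("final state:", s, " energy:", energy(s))
print("recalled p1:", np.array_equal(s, p1),
      " recalled p2:", np.array_equal(s, p2))
# note: -p1 and -p2 are equilibria as well, so both checks may be False
```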

Key Terms in this Chapter

Lyapunov Function: Scalar function defined on the state space of a system in order to obtain some qualitative properties - stability of equilibria, oscillatory behavior etc. - using a single function instead of examining the system’s state trajectories one by one. A Lyapunov function is usually positive definite and, along the system’s trajectories, at least nonincreasing. The definite sign condition may be relaxed for generalized Lyapunov functions in the sense of LaSalle. The basic physical model for the Lyapunov function is the system’s energy - a state function that is nonincreasing along the state trajectory while being at the same time positive definite. The strength of the Lyapunov function lies exactly in its independence from physical concepts, since writing down the stored energy of a system is not an easy job except possibly in such standard cases as mechanical systems or electrical circuits. Energy-like concepts may nevertheless be inspiring when “guessing” a Lyapunov function. In infinite-dimensional cases, e.g. time-delay or propagation systems, the Lyapunov function is replaced by a Lyapunov functional defined on the infinite-dimensional state space.
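A minimal sketch of how such a function is used: for the assumed system ẋ₁ = −x₁ + x₂, ẋ₂ = −x₁ − x₂³, the candidate V(x) = x₁² + x₂² satisfies dV/dt = −2x₁² − 2x₂⁴ ≤ 0, and the code checks numerically that V is nonincreasing along a simulated trajectory. This illustrates the definition; it does not replace the symbolic bound.

```python
import numpy as np

# Numerical check that V(x) = x1^2 + x2^2 is nonincreasing along
# trajectories of the assumed system x1' = -x1 + x2, x2' = -x1 - x2^3,
# for which dV/dt = -2*x1^2 - 2*x2^4 <= 0 holds analytically.

def rhs(x):
    return np.array([-x[0] + x[1], -x[0] - x[1] ** 3])

def V(x):
    return x[0] ** 2 + x[1] ** 2

x, h = np.array([1.0, -2.0]), 1e-3
values = []
for _ in range(5000):                # crude Euler integration
    values.append(V(x))
    x = x + h * rhs(x)

diffs = np.diff(values)
print("V nonincreasing along this trajectory:", np.all(diffs <= 1e-12))
```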

Recurrent Neural Network (RNN): Neural networks which display feedback interconnections among their units (neurons). Due to these cyclic connections RNNs are nonlinear dynamical systems with very rich spatial and temporal behaviors: stable and unstable fixed points, limit cycles and chaotic behavior. These behaviors make them suitable for modeling certain cognitive functions such as associative memory, unsupervised learning, self-organizing maps and temporal reasoning.

Fixed Point Theorem: If f(x) is some function of a real variable with real values, the values such that f(x) = x are called the fixed points of the mapping. In general, if f: X → X is a mapping from the metric space X into itself, the fixed points of this mapping are defined as above. A fixed point theorem is a theorem showing under which conditions some mapping has a fixed point in the corresponding metric space.
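A minimal sketch of the contraction-mapping (Banach) fixed point theorem: f(x) = cos x is a contraction on [0, 1] (|f′| ≤ sin 1 < 1), so the simple iteration below converges to its unique fixed point.

```python
import math

# Banach fixed point theorem in action: iterating the contraction
# f(x) = cos(x) converges to the unique x* with cos(x*) = x*.
x = 0.0
for _ in range(100):
    x = math.cos(x)

print("fixed point:", x, " residual:", abs(math.cos(x) - x))
```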

Oscillations (Self-Sustained and Forced): Type of steady state behavior when the state trajectories, while remaining bounded, never reach an equilibrium but their deviations from this equilibrium keep sign changing. Usually an oscillation is viewed as having some recurrent properties, being either periodic or almost periodic. When the system is autonomous i.e. free of external oscillatory signals while nevertheless displaying an oscillatory behavior which is sustained by non-oscillatory internal factors of the system, it is said that this system displays self-sustained oscillations (the term belongs to Mandelstamm and Andronov). When the system is non-autonomous and subject to external oscillatory signals (stimuli), the limit regime that occurs is called forced oscillation.

Synchronization: Interaction phenomenon among coupled subsystems of a system resulting in some ordering of their evolution. Its maximal stage is the complete synchronization of the subsystems’ periods resulting in a periodic evolution of the state of the entire system. When a system is externally forced by an oscillatory signal, synchronization means a limit regime of the entire state, which has the same waveform as the forcing signal (periodic with the same period if the forcing signal is periodic or almost periodic if the forcing signal is such).
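A minimal sketch of synchronization to a periodic stimulus, under an assumed scalar model (not the chapter’s): the system ẋ = −x − x³ + sin t is incrementally stable, so trajectories issued from different initial states converge to one and the same forced oscillation of period 2π.

```python
import numpy as np

# Forced oscillation / synchronization sketch: x' = -x - x^3 + sin(t).
# Two trajectories with different initial states forget their initial
# conditions and settle into the same 2*pi-periodic limit regime.

h, T = 1e-3, 60.0
t = np.arange(0.0, T, h)

def simulate(x0):
    x, out = x0, np.empty(t.size)
    for k, tk in enumerate(t):
        out[k] = x
        x = x + h * (-x - x ** 3 + np.sin(tk))
    return out

xa, xb = simulate(2.0), simulate(-1.5)
gap = np.abs(xa - xb)[-int(2 * np.pi / h):].max()
print("max gap over the last period:", gap)   # essentially zero
```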

Phase Portrait: Term borrowed from the Poincaré theory of the phase (state) plane, where this portrait is better defined. Its extension to higher-order systems is mainly informal, based on geometric arguments. By phase portrait is understood the totality of state trajectories, both limit regimes (equilibria, recurrent motions, limit sets) and standard trajectories, e.g. those defined by initial conditions.

Global Stability: An equilibrium is globally (asymptotically) stable if it is the unique equilibrium of the dynamical system and the property holds globally (its domain of attraction is the entire state space).

Frequency Domain Stability Inequality of Popov: Consider a feedback structure containing a linear dynamical block with the transfer function H(s) and a nonlinear function φ subject to the sector condition 0 < φ(σ)σ < kσ². The Popov inequality ensures absolute stability, i.e. global asymptotic stability of the zero equilibrium for all nonlinear functions satisfying the above sector condition, and reads as follows: there exists some β such that 1/k + Re[(1 + jωβ)H(jω)] > 0 for all ω ≥ 0.
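A minimal numerical sketch: the Popov inequality can be checked on a frequency grid for a given transfer function. The transfer function H(s) = 1/(s² + s + 1), the sector gain k = 2 and the multiplier β = 1 below are illustrative assumptions.

```python
import numpy as np

# Grid check of the Popov inequality
#   1/k + Re[(1 + j*beta*omega) * H(j*omega)] > 0  for all omega >= 0
# for the illustrative transfer function H(s) = 1 / (s^2 + s + 1).

def H(s):
    return 1.0 / (s ** 2 + s + 1.0)

k, beta = 2.0, 1.0
omega = np.linspace(0.0, 100.0, 20001)
popov = 1.0 / k + ((1 + 1j * beta * omega) * H(1j * omega)).real

print("Popov inequality holds on the grid:", np.all(popov > 0))
```

A grid check is of course only a plausibility test; the inequality must hold for all ω ≥ 0, which here can also be verified by direct computation of the real part.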

Stability: Qualitative property of a solution of a system, with the significance of limiting the effect of perturbations on the considered solution, viewed as basic. Among all kinds of stability (bounded input/bounded output, Lagrange stability, Birkhoff stability, input-to-state stability), stability in the sense of Lyapunov - with respect to the initial conditions, viewed as incorporating the effect of short-period perturbations - is the most widely used; it means that sufficiently small deviations in the initial condition (state) will result in arbitrarily small deviations of the current state at all following moments. Rigorously, the basic solution x̄(t) of (3) is called stable in the sense of Lyapunov if, for any arbitrarily small ε > 0 and any initial moment t₀, there exists some sufficiently small δ(ε, t₀) > 0 such that if ‖x(t₀) − x̄(t₀)‖ < δ, then ‖x(t) − x̄(t)‖ < ε for all t ≥ t₀. If in the above definition δ is independent of the initial moment t₀, the stability is called uniform; from the point of view of practice, this is the more important notion of stability. It is also a necessary condition for uniform asymptotic stability (see below).

Asymptotic Stability: The solution x̄(t) of (3) is called asymptotically stable if it is Lyapunov stable (see above) and, moreover, there exists some Δ > 0 such that if ‖x(t₀) − x̄(t₀)‖ < Δ, then lim_{t→∞} ‖x(t) − x̄(t)‖ = 0.
