1. Introduction
Research on neural networks has experienced a resurgence over the past three decades, believed to have been initiated by several seminal works in the area of recurrent neural networks (RNNs). The hallmark of an RNN, in contrast to feed-forward neural networks, is the existence of connections from posterior layer(s) to anterior layer(s), or connections among neurons within the same layer. Because of these connections, the networks become dynamic systems, which brings many promising capabilities that their feed-forward counterparts do not possess. One obvious capability of RNNs is that they can handle temporal information directly and naturally, whereas feed-forward networks must first convert patterns from the temporal domain into the spatial domain for further processing. Two other distinguishing capabilities of RNNs are associative memory and optimization. The field of RNNs has evolved rapidly in recent years, becoming a fusion of a number of research areas in engineering, computer science, mathematics, artificial intelligence, operations research, systems theory, biology, and neuroscience. RNNs have been widely applied in control, optimization, pattern recognition, image processing, and signal processing (Rovithakis & Christodoulou, 2000).
Recent research has pointed out the key role of signal transmission delays, to the extent that they may cause instability and oscillatory behavior in neural networks (Arik, 2002) or lead to poor performance. Time-delays are inevitably encountered in RNNs, since the interactions between different neurons are asynchronous. Therefore, the stability analysis of RNNs with time-delays has been the subject of numerous studies, and many results have appeared in the literature, including the existence of periodic solutions, global asymptotic stability, and global exponential stability; see Cao and Wang (2003, 2005), Cao and Ho (2005), Chen et al. (2006a, 2006b), Haykin (1994), He et al. (2007), Hu and Wang (2006), Jagannathan and Lewis (1996), Jin et al. (1994), Liang et al. (2005), Liao and Wang (2000), and Liu et al. (2007).
RNNs may be dealt with in either a continuous-time or a discrete-time manner. It has been pointed out in Mohamad and Gopalsamy (2003) that the discretization process cannot preserve the dynamics of the continuous-time counterpart.

Let F(z), y1(z), y2(z), …, yk(z) be some functionals or functions on a set Z. Define the domain D as

D = {z ∈ Z : y1(z) ≥ 0, y2(z) ≥ 0, …, yk(z) ≥ 0}

and the two following conditions:

- (I) F(z) > 0, ∀z ∈ D;
- (II) ∃ ε1 ≥ 0, ε2 ≥ 0, …, εk ≥ 0 such that F(z) − ε1y1(z) − ε2y2(z) − ⋯ − εkyk(z) > 0, ∀z ∈ Z.

Then (II) implies (I).
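As a minimal numerical sketch of why (II) implies (I), consider a toy instance chosen here for illustration (not taken from the paper): F(z) = z and a single constraint y1(z) = z − 1, so that D = {z : z ≥ 1}. With the multiplier ε1 = 1, condition (II) holds on all of Z, and checking F on the constrained domain D confirms (I):

```python
import numpy as np

# Toy instance (illustrative assumption, not from the paper):
# F(z) = z, y1(z) = z - 1, so D = {z : z >= 1}.
F = lambda z: z
y1 = lambda z: z - 1.0

# Condition (II): exists eps1 >= 0 with F(z) - eps1*y1(z) > 0 for all z in Z.
# Here F(z) - 1.0*y1(z) = z - (z - 1) = 1 > 0 identically.
eps1 = 1.0
grid = np.linspace(-10.0, 10.0, 2001)  # sample Z on a finite grid
cond_II = bool(np.all(F(grid) - eps1 * y1(grid) > 0))

# Condition (I): F(z) > 0 on the constrained domain D only.
D = grid[y1(grid) >= 0]
cond_I = bool(np.all(F(D) > 0))

print(cond_II, cond_I)
```

Note that F itself is negative on part of Z; the S-procedure multiplier ε1 absorbs the constraint y1 so that positivity need only be verified over the unconstrained set Z, which is what makes the conversion of non-strict constraints into a single strict inequality tractable.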
This procedure, whenever applicable, is useful in converting non-strict LMIs into strict LMIs. Sometimes, the arguments of a function will be omitted when no confusion can arise.