Introduction
Artificial neural networks are well known for their learning ability. They have been effectively used for deterministic learning in the context of adaptive control (Farrell, 1998; Jiang & Wang, 2000; Rovithakis & Christodoulou, 2000; Spooner, Maggiore, Ordonez, & Passino, 2002; Ge, Hang, Lee, & Zhang, 2002; Farrell & Polycarpou, 2006) and for computational or statistical learning in the context of machine learning (Vapnik, 2000).
Recently, a deterministic learning theory (Wang, Hill, & Chen, 2003; Wang & Hill, 2006) was proposed and applied to the dynamical pattern recognition problem (Wang & Hill, 2007, 2010). Wang and Hill (2007) address the dynamical pattern recognition problem of temporal patterns generated from a dynamical system
\dot{x} = f(x; p), \qquad x(t_0) = x_0, \qquad (1)
where x = [x_1, \ldots, x_n]^T \in \mathbb{R}^n is the state vector, p is a vector of system parameters, and f(x; p) = [f_1(x; p), \ldots, f_n(x; p)]^T represents the system dynamics, with each f_i(x; p) a smooth, unknown, nonlinear function.
Dynamical patterns are defined as general recurrent trajectories generated from (1) and include, among others, periodic, quasi-periodic, and even chaotic trajectories. As described in Wang and Hill (2007), the pattern recognition process involves two main tasks: an initial identification task and a recognition task.
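As a concrete illustration (not taken from the paper), the Van der Pol oscillator is a standard example of a system of the form (1) whose trajectory converges to a periodic orbit, i.e., a recurrent dynamical pattern. The sketch below simulates it and checks recurrence; the parameter values, integration scheme, and tolerances are illustrative assumptions.

```python
import numpy as np

# Van der Pol oscillator: a classic example of a dynamical pattern
# generated by a system of the form (1). The trajectory converges to
# a limit cycle, so it is recurrent: it keeps revisiting the same
# region of the state space. (Illustrative example, not from the paper.)

def vdp(x, mu=1.0):
    # x = [x1, x2]; dx1/dt = x2, dx2/dt = mu*(1 - x1^2)*x2 - x1
    return np.array([x[1], mu * (1.0 - x[0] ** 2) * x[1] - x[0]])

dt, T = 1e-3, 20.0
x = np.array([2.0, 0.0])          # start close to the limit cycle
x0 = x.copy()
traj = [x.copy()]
for _ in range(int(T / dt)):
    x = x + dt * vdp(x)           # forward-Euler integration
    traj.append(x.copy())
traj = np.array(traj)

# Recurrence check: after the transient, the trajectory returns near
# its starting point (the period is roughly 6.7 for mu = 1).
dists = np.linalg.norm(traj[int(5.0 / dt):] - x0, axis=1)
recurrent = bool(dists.min() < 0.3)
bounded = bool(np.abs(traj).max() < 4.0)
```

A quasi-periodic or chaotic pattern would fail the exact-return test but would still repeatedly revisit any neighborhood it has passed through, which is the sense of recurrence used above.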
A deterministic learning approach based on localized radial basis function neural networks (RBF NNs) (Sanner & Slotine, 1992) is adopted in Wang, Hill, and Chen (2003) and Wang and Hill (2006) for the initial identification task. Using this learning scheme, information on the dynamical pattern is obtained and stored in the RBF NN weights. After the identification procedure, a set of dynamical models (the so-called "test set") is constructed. These models are then employed in the pattern recognition task, which involves comparisons between the actual and the test patterns (generated by the test models) based on a suitable similarity measure. A detailed description of the overall methodology can be found in Wang and Hill (2007, 2010).
In this paper, we focus on the initial identification task and propose an alternative approach to deterministic learning. As a first step, an observer is designed based on the robust integral of the sign of the error (RISE) approach (Xian, Dawson, de Queiroz, & Chen, 2004; Patre, MacKunis, Kaiser, & Dixon, 2008; Patre, MacKunis, Makkar, & Dixon, 2008) that provides an estimate of the smooth vector field f(x; p) which is asymptotically exact in time. A localized neural network can then be employed to extract and store the information of this estimate.
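To make the observer idea concrete, the sketch below implements a scalar RISE-type estimator: the observer integrates the RISE feedback term mu, and once the estimation error is driven to zero, mu(t) itself tracks the unknown right-hand side f. The gains, the test system, and the tolerances are illustrative assumptions, not the paper's design.

```python
import math

# Scalar RISE-based estimation sketch (illustrative assumptions).
# True system:   xdot = f(t, x),  with f unknown to the observer.
# Observer:      xhat_dot = mu,   e = x - xhat,
# RISE term:     mu(t) = (k+1)*e(t) - (k+1)*e(0)
#                        + int_0^t [(k+1)*a*e + b*sgn(e)] ds.
# Since e_dot = f - mu, driving e (and e_dot) to zero makes mu an
# asymptotic estimate of f along the trajectory.

def f(t, x):
    return -x + math.sin(2.0 * t)   # stand-in unknown dynamics

k, a, b = 10.0, 5.0, 10.0           # illustrative gains (b chosen large
                                    # relative to the derivatives of f)
dt, T = 1e-4, 5.0
x, xhat = 1.0, 0.0
e0 = x - xhat
integral = 0.0
t = 0.0
for _ in range(int(T / dt)):
    e = x - xhat
    integral += ((k + 1.0) * a * e + b * math.copysign(1.0, e)) * dt
    mu = (k + 1.0) * e - (k + 1.0) * e0 + integral
    xhat += mu * dt                 # observer step
    x += f(t, x) * dt               # plant step (forward Euler)
    t += dt

final_err = abs(x - xhat)           # state estimation error at time T
est_err = abs(mu - f(t, x))         # how well mu estimates f at time T
```

The integral of the sign term is what removes the steady estimation error without requiring a discontinuous observer output, which is the distinguishing feature of the RISE structure.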
To this end, we introduce a new class of localized neural networks, called patchy neural networks (PNNs), with basis functions that are "patches" of the state space. We prove their universal approximation capability; i.e., we show that a PNN with a sufficient number of nodes can approximate a general smooth nonlinear function with the desired accuracy over a compact region.
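The universal-approximation idea can be illustrated with the simplest conceivable "patchy" basis: indicator functions over a uniform partition of a compact interval (an assumed construction for illustration; the paper's PNN basis may be defined differently). With weights set algebraically to function values at the patch centers, the approximation error of a Lipschitz function is bounded by L*h/2, where h is the patch width:

```python
import numpy as np

# Illustrative "patchy" approximation: each basis function is the
# indicator of one cell of a uniform partition of [0, 2*pi] (an
# assumption for illustration; the actual PNN basis may differ).
# Weights are set algebraically: w_i = f(center of patch i).

f = np.sin                      # smooth target, Lipschitz constant L = 1
lo, hi, n_patches = 0.0, 2.0 * np.pi, 200
h = (hi - lo) / n_patches       # patch width
centers = lo + (np.arange(n_patches) + 0.5) * h
w = f(centers)                  # one weight per patch, no training loop

def pnn(x):
    # Each point activates exactly one patch (one nonzero basis function).
    idx = np.clip(((x - lo) / h).astype(int), 0, n_patches - 1)
    return w[idx]

xs = np.linspace(lo, hi, 5001)
max_err = float(np.abs(pnn(xs) - f(xs)).max())  # bounded by L*h/2 ~ 0.0157
```

Shrinking the patches (increasing the number of nodes) drives the error bound L*h/2 to zero, which is the mechanism behind the universal approximation property of such localized bases.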
A simple PNN is then employed to extract and store the information obtained from the observer estimate based on an easy-to-implement algebraic weight-update law. The advantages of the proposed methodology with respect to Wang and Hill (2007, 2010) are: