An Observer Approach for Deterministic Learning Using Patchy Neural Networks with Applications to Fuzzy Cognitive Networks

H. E. Psillakis (Technological and Educational Institute of Crete, Greece), M. A. Christodoulou (Technical University of Crete, Greece), T. Giotis (Technical University of Crete, Greece) and Y. Boutalis (Democritus University of Thrace, Greece)
Copyright: © 2011 |Pages: 16
DOI: 10.4018/jalr.2011010101

Abstract

In this paper, a new methodology is proposed for deterministic learning with neural networks. Using an observer that employs the integral of the sign of the error term, asymptotic estimation of the respective nonlinear vector field is achieved. Patchy Neural Networks (PNNs) are introduced to identify the unknown nonlinearity from the observer's output and the state measurements. The proposed scheme achieves learning in a single pass through the respective patches and does not require the standard persistency-of-excitation conditions. Furthermore, the PNN weights are updated algebraically, significantly reducing the computational load of learning. Simulation results for a Duffing oscillator and a fuzzy cognitive network illustrate the effectiveness of the proposed approach.
Article Preview

Introduction

Artificial neural networks are well known for their learning ability. They have been efficiently used for deterministic learning in the context of adaptive control (Farrell, 1998; Jiang & Wang, 2000; Rovithakis & Christodoulou, 2000; Spooner, Maggiore, Ordonez, & Passino, 2002; Ge, Hang, Lee, & Zhang, 2002; Farrell & Polycarpou, 2006) and for computational or statistical learning in the context of machine learning (Vapnik, 2000).

Recently, a deterministic learning theory (Wang, Hill, & Chen, 2003; Wang & Hill, 2006) was proposed and applied to the dynamical pattern recognition problem (Wang & Hill, 2007, 2010). Wang and Hill (2007) address the dynamical pattern recognition problem of temporal patterns generated from a dynamical system

ẋ = F(x; p)   (1)

where x = [x1, …, xn]^T ∈ R^n is the state vector, p is a vector of system parameters, and F(x; p) = [f1(x; p), …, fn(x; p)]^T represents the system dynamics, with each fi(x; p) a smooth, unknown, nonlinear function.

Dynamical patterns are defined as general recurrent trajectories generated from (1) and include among others periodic, quasi-periodic or even chaotic trajectories. As described in Wang and Hill (2007) the pattern recognition process involves two main tasks: an initial identification task and a recognition task.
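To make the notion of a recurrent dynamical pattern concrete, the sketch below numerically integrates a forced Duffing oscillator (the simulation example mentioned in the abstract) and generates such a trajectory. The double-well parameter values and the fixed-step RK4 integrator are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def duffing(state, t, delta=0.25, q=0.3, omega=1.0):
    # Double-well Duffing dynamics F(x; p): a standard generator of
    # recurrent (periodic or chaotic) patterns. Parameter values are
    # illustrative only.
    x1, x2 = state
    return np.array([x2, x1 - x1**3 - delta * x2 + q * np.cos(omega * t)])

def simulate(x0, dt=0.01, steps=5000):
    # Fixed-step RK4 integration of the trajectory from x0.
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for k in range(steps):
        t = k * dt
        k1 = duffing(x, t)
        k2 = duffing(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = duffing(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = duffing(x + dt * k3, t + dt)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(x.copy())
    return np.array(traj)

traj = simulate([0.5, 0.0])
```

With the damping and forcing chosen here the trajectory remains bounded and settles onto a recurrent orbit; varying the forcing amplitude q moves the oscillator between periodic and chaotic regimes, i.e., between different dynamical patterns in the sense above.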

A deterministic learning approach based on localized radial basis function neural networks (RBF NNs) (Sanner & Slotine, 1992) is adopted in Wang, Hill, and Chen (2003) and Wang and Hill (2006) for the initial identification task. Using this learning scheme, information on the dynamical pattern is obtained and stored in the RBF NN weights. After the identification procedure, a set of dynamical models (the so-called "test set") is constructed. These models are then employed in the pattern recognition task, which involves comparisons between the actual and the test patterns (generated by the test models) based on some suitable similarity measure. A detailed description of the overall methodology can be found in Wang and Hill (2007, 2010).
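For context, a localized RBF NN approximates a function as f_hat(x) = W^T S(x), where each component of S(x) is a Gaussian centered on a grid point and is significantly nonzero only near that center. The sketch below illustrates this generic structure; the grid, width, and region are assumed values, not those used in the cited works:

```python
import numpy as np

def rbf_basis(x, centers, width=0.5):
    # Localized Gaussian basis S(x): each node responds only near its
    # own center, so each weight stores local information.
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width**2))

# Grid of centers covering a compact region of the state space.
g = np.linspace(-2.0, 2.0, 9)
centers = np.array([[a, b] for a in g for b in g])

def rbf_net(x, W, centers):
    # f_hat(x) = W^T S(x): the standard localized RBF NN output.
    return W @ rbf_basis(x, centers)
```

Because each basis function is local, a training trajectory only excites (and hence only updates) the weights of nodes near the visited states, which is what allows the weights to store spatially localized information about the pattern.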

In this paper, we focus on the initial identification task and propose an alternative approach to deterministic learning. As a first step, an observer is designed based on the robust integral of the sign of the error (RISE) approach (Xian, Dawson, de Queiroz, & Chen, 2004; Patre, MacKunis, Kaiser, & Dixon, 2008; Patre, MacKunis, Makkar, & Dixon, 2008) that provides an asymptotic time estimate of the smooth vector field f(x;p). A localized neural network can then be employed to extract and store the information of this estimate.
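The paper's observer design is not reproduced in this preview, but the core RISE idea can be sketched as follows: feed back both the estimation error and the integral of the sign of that error, so that the feedback term converges asymptotically to the unknown vector field. The discrete-time form and the gains k1, k2, beta below are illustrative assumptions:

```python
import numpy as np

def rise_observer_step(x, xhat, nu, dt, k1=5.0, k2=1.0, beta=2.0):
    # One Euler step of a RISE-type observer (illustrative gains):
    #   e     = x - xhat                 (state estimation error)
    #   xhat' = k1*e + nu                (observer dynamics)
    #   nu'   = k2*e + beta*sign(e)      (integral of the sign of the error)
    # As e -> 0, the feedback term (k1*e + nu) converges to the unknown
    # vector field f(x; p), giving an asymptotic estimate of it.
    e = x - xhat
    f_est = k1 * e + nu
    xhat = xhat + dt * f_est
    nu = nu + dt * (k2 * e + beta * np.sign(e))
    return xhat, nu, f_est
```

Running this step alongside the state measurements x(t) yields f_est(t) as an asymptotic estimate of f(x(t); p), which is the signal a localized network can subsequently fit.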

To this end, we introduce a new class of localized neural networks, called patchy neural networks (PNNs), whose basis functions are "patches" of the state space. We prove their universal approximation capability; i.e., we show that a PNN with a sufficient number of nodes can approximate a general smooth nonlinear function to any desired accuracy over a compact region.
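The preview does not give the formal PNN definition, but a minimal interpretation consistent with the text is a piecewise-constant network whose basis functions are indicators of disjoint grid cells ("patches"), so that refining the grid yields approximation of smooth functions to any desired accuracy. The region bounds, grid resolution, and first-visit assignment rule below are assumptions for illustration only:

```python
import numpy as np

def patch_index(x, lo, hi, n_per_dim):
    # Map a point x in [lo, hi)^d to the index of its grid cell
    # ("patch"). Patch basis functions are indicators of these cells,
    # so at most one basis function is active at any state.
    idx = np.floor((np.asarray(x) - lo) / (hi - lo) * n_per_dim).astype(int)
    return tuple(np.clip(idx, 0, n_per_dim - 1))

def pnn_eval(x, W, lo=-2.0, hi=2.0, n_per_dim=20):
    # f_hat(x) = weight of the single active patch (here W stores one
    # scalar component of the estimated vector field per patch).
    return W[patch_index(x, lo, hi, n_per_dim)]

def pnn_update(W, visited, x, f_est, lo=-2.0, hi=2.0, n_per_dim=20):
    # Algebraic single-pass update: the first time the trajectory
    # visits a patch, its weight is set directly to the observer's
    # current estimate -- no gradient iterations are required.
    idx = patch_index(x, lo, hi, n_per_dim)
    if idx not in visited:
        W[idx] = f_est
        visited.add(idx)
    return W, visited
```

Under this reading, at most one patch is active per state and each weight is assigned algebraically on first visit, so learning needs only a single pass through the visited patches and no persistency-of-excitation condition.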

A simple PNN is then employed to extract and store the information obtained from the observer estimate, based on an easy-to-implement algebraic weight update law. The advantages of the proposed methodology with respect to Wang and Hill (2007, 2010) are:
