Faster Self-Organizing Fuzzy Neural Network Training and Improved Autonomy with Time-Delayed Synapses for Locally Recurrent Learning


Damien Coyle, Girijesh Prasad, Martin McGinnity
DOI: 10.4018/978-1-60960-018-1.ch008

Abstract

This chapter describes a number of modifications to the learning algorithm and architecture of the self-organizing fuzzy neural network (SOFNN) to improve its computational efficiency and learning ability. To improve the SOFNN’s computational efficiency, a new method of checking the network structure after it has been modified is proposed. Instead of testing the entire structure every time it is modified, a record is kept of each neuron’s firing strength for all data previously clustered by the network. This record is updated as training progresses and is used to reduce the computational load of checking network structure changes and to ensure performance degradation does not occur, resulting in significantly reduced training times. It is shown that the modified SOFNN compares favorably to other evolving fuzzy systems in terms of accuracy and structural complexity. In addition, a new architecture of the SOFNN is proposed in which recurrent feedback connections are added to neurons in layer three of the structure. Recurrent connections allow the network to learn temporal information from the data: in contrast to pure feedforward architectures, which exhibit static input-output behavior, recurrent models are able to store information from the past (e.g., past measurements of the time-series) and are therefore better suited to analyzing dynamic systems. Each recurrent feedback connection includes a weight which must be learned. In this work a learning approach is proposed in which the recurrent feedback weight is updated online (not iteratively), proportional to the aggregate firing activity of each fuzzy neuron. It is shown that this modification can significantly improve the SOFNN’s prediction performance under certain constraints.
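As a rough illustration of the recurrent modification summarized above, the Python sketch below shows a layer-three neuron with a time-delayed self-feedback connection whose weight is updated online in proportion to the neuron's aggregate firing activity. The class name, the proportionality constant eta and the normalization by the sample count are illustrative assumptions, not details taken from the chapter.

class RecurrentNormNeuron:
    """Sketch of a layer-three neuron with a time-delayed feedback connection."""

    def __init__(self, eta=1e-3):
        self.eta = eta          # proportionality constant (assumed, not from the chapter)
        self.w_fb = 0.0         # time-delayed feedback weight, learned online
        self.prev_out = 0.0     # neuron output at the previous time step
        self.aggregate = 0.0    # running sum of this neuron's firing strengths
        self.n_seen = 0         # number of samples processed so far

    def step(self, phi, phi_sum):
        """phi: this neuron's firing strength; phi_sum: sum over all neurons."""
        # normalized firing strength plus the delayed, weighted feedback term
        out = phi / phi_sum + self.w_fb * self.prev_out

        # online (single-pass) update: the feedback weight is kept proportional
        # to the neuron's average firing activity seen so far
        self.aggregate += phi
        self.n_seen += 1
        self.w_fb = self.eta * self.aggregate / self.n_seen

        self.prev_out = out
        return out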
Chapter Preview

Introduction

Over recent years there has been significant emphasis on developing self-organizing fuzzy systems that continuously evolve and adapt to non-stationary dynamics in complex datasets (Leng et al., 2004; Leng, 2003; Wu & Er, 2000; Kasabov, 2003; Angelov & Filev, 2004; Lughofer & Klement, 2005; Kasabov, 2001; Kasabov & Song, 2002). Many of these developments have been used successfully for applications such as function approximation, system identification and time-series prediction, and are often tested on benchmark problems such as the two-input non-linear sinc problem, Mackey-Glass time-series prediction and others (Leng et al., 2004; Leng, 2003; Wu & Er, 2000; Kasabov, 2003; Jang et al., 1997). An example of a network with an online, self-organizing training algorithm is the self-organizing fuzzy neural network (SOFNN) (Coyle et al., 2006; Coyle et al., 2009; Leng, 2003; Leng et al., 2004; Prasad et al., 2010). The SOFNN is capable of self-organizing its architecture, adding and pruning neurons as required. New neurons are added to cluster new data that the existing neurons are unable to cluster. Inevitably, if the data is highly non-linear and non-stationary, as training progresses and more data are fed to the network, the structural complexity increases and the training efficiency begins to degrade. This is because each neuron’s firing strength must be calculated, for an ever greater number of neurons, over all data previously presented to the network, to ensure that changes to the network do not affect the network’s ability to optimally cope with older, previously learned data dynamics. This problem occurs whenever the structure update algorithm depends on information contained in the error derived from the existing network, which is often the case, and is the case with the SOFNN. The problem is amplified if the neurons contain fuzzy membership functions (MFs) which are expensive to compute, e.g., the exponential function.
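A minimal sketch of the kind of firing-strength record described in the abstract is given below. It assumes Gaussian (EBF-style) membership functions and caches each neuron's firing strengths over past samples, so that a structure check only recomputes the expensive exponentials for neurons that have been added or modified; the class and method names are hypothetical, not taken from the chapter.

import numpy as np

class FiringStrengthRecord:
    """Cache of each neuron's firing strengths over previously presented data."""

    def __init__(self):
        self.record = {}   # neuron id -> list of firing strengths, one per past sample

    @staticmethod
    def firing(centre, width, x):
        # Gaussian membership aggregated over the input dimensions
        return float(np.exp(-np.sum((x - centre) ** 2 / (2.0 * width ** 2))))

    def add_sample(self, neurons, x):
        # extend every neuron's record with the newly presented sample
        for nid, (centre, width) in neurons.items():
            self.record.setdefault(nid, []).append(self.firing(centre, width, x))

    def refresh_neuron(self, nid, centre, width, past_data):
        # only an added or modified neuron is re-evaluated over the old data
        self.record[nid] = [self.firing(centre, width, x) for x in past_data]

    def matrix(self, neuron_ids):
        # firing strengths of the current structure over all past data, taken
        # from the cache instead of being recomputed from scratch
        return np.array([self.record[nid] for nid in neuron_ids]).T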

There are algorithms in the literature which have tackled this problem; however, fuzzy-based approaches have a structure that restricts the use of many of these techniques. For example, the growing and pruning radial basis function network (GAP-RBFN) (Huang et al., 2004) enables the effect of removing a neuron on the whole structure to be determined by checking only that neuron’s contribution to the network. For fuzzy-based approaches such as the SOFNN (Coyle et al., 2009; Leng et al., 2004; Leng, 2003) this method cannot be applied, because the fourth layer of the network contains the consequent part of the fuzzy rules in the fuzzy inference system (FIS), which employs a global learning strategy. A single fuzzy rule does not determine the system output alone; it works conjunctively with all fuzzy rules in the FIS, so the SOFNN error cannot be determined by removing a neuron and applying the technique described for the GAP-RBFN (Huang et al., 2004). Also, as the fuzzy rules of the SOFNN are based on zero- or first-order Takagi-Sugeno (TS) models, the linear parameters (the consequent parameters of the fuzzy rules) are updated using the recursive least squares estimator (RLSE); however, if the structure is modified during training, the parameters must be re-estimated using the LSE, so the SOFNN is only recursive while the network structure remains unchanged. A number of approaches have been developed for online recursive estimation of the consequent parameters and for local learning rules (Kasabov, 2003; Angelov & Filev, 2004; Lughofer & Klement, 2005; Kasabov, 2001; Kasabov & Song, 2002), although in this study it is shown that these algorithms are not as accurate as the SOFNN, even though they normally evolve complex structures.
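For the parameter-estimation issue mentioned above, the sketch below shows a standard recursive least squares update for the linear TS consequent parameters, together with a batch fall-back for when the structure (and hence the regressor dimension) changes. The initialization constant and forgetting factor are assumptions for illustration, not the chapter's settings.

import numpy as np

class ConsequentRLSE:
    """RLSE for the linear consequent parameters of a zero/first-order TS model."""

    def __init__(self, dim, alpha=1e4, lam=1.0):
        self.theta = np.zeros(dim)      # consequent parameter vector
        self.P = alpha * np.eye(dim)    # inverse correlation matrix
        self.lam = lam                  # forgetting factor (1.0 = no forgetting)

    def update(self, a, y):
        """a: regressor built from normalized firing strengths (and inputs); y: target."""
        Pa = self.P @ a
        k = Pa / (self.lam + a @ Pa)                     # gain vector
        self.theta = self.theta + k * (y - a @ self.theta)
        self.P = (self.P - np.outer(k, Pa)) / self.lam

    @staticmethod
    def batch_lse(A, y):
        # when a neuron is added or pruned the regressor dimension changes, so
        # the parameters are re-estimated in batch over all of the past data
        return np.linalg.lstsq(A, y, rcond=None)[0]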
