On Temporal Summation in Chaotic Neural Network with Incremental Learning

Toshinori Deguchi (Gifu National College of Technology, Gifu, Japan), Toshiki Takahashi (Gifu National College of Technology, Gifu, Japan) and Naohiro Ishii (Aichi Institute of Technology, Toyota, Japan)
Copyright: © 2014 | Pages: 13
DOI: 10.4018/ijsi.2014100106

Abstract

Incremental learning is a method of composing an associative memory using a chaotic neural network; it provides larger capacity than correlative learning at the cost of a large amount of computation. A chaotic neuron performs spatiotemporal summation, and the temporal summation makes the learning robust to input noise. When there is no noise in the input, the neuron may not need temporal summation. In this paper, to reduce the computation, a simplified network without temporal summation is introduced and investigated through computer simulations, in comparison with the network used in past work, which is called here the usual network. It turns out that the simplified network has the same capacity as the usual network and can learn faster, but that it loses the learning ability on noisy inputs. To improve this ability, the parameters of the chaotic neural network are adjusted.
Article Preview

Chaotic Neural Networks And Incremental Learning

Incremental learning was developed using chaotic neurons. Chaotic neurons and chaotic neural networks were proposed by Aihara (Aihara, Tanabe & Toyoda, 1990).

We presented incremental learning, which provides an associative memory (Asakawa, Deguchi & Ishii, 2001; Deguchi & Ishii, 2004; Deguchi, Fukuta & Ishii, 2013; Deguchi, Matsuo, Kimura & Ishii, 2009-07; Deguchi, Takahashi & Ishii, 2014). The network is an interconnected network in which each neuron receives one external input, and is defined as follows (Aihara, Tanabe & Toyoda, 1990):

x_i(t+1) = f( ξ_i(t+1) + η_i(t+1) + ζ_i(t+1) ), (1)

ξ_i(t+1) = k_s ξ_i(t) + v A_i(t), (2)

η_i(t+1) = k_m η_i(t) + Σ_{j=1}^{n} w_{ij} x_j(t), (3)

ζ_i(t+1) = k_r ζ_i(t) − α x_i(t), (4)

where x_i(t) is the output of the i-th neuron at time t, f is the output sigmoid function described below in (5), k_s, k_m, k_r are the time decay constants, A_i(t) is the input to the i-th neuron at time t, v is the weight for external inputs, n is the size (the number of neurons in the network), w_{ij} is the connection weight from the j-th neuron to the i-th neuron, and α is the parameter that specifies the relation between the neuron output and the refractoriness.
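The update above can be sketched as a short simulation. This is a minimal illustration, not the authors' implementation: the parameter values (k_s, k_m, k_r, v, α, the sigmoid steepness ε, and the random weights) are assumptions chosen only to make the sketch run, and the true sigmoid f is the one given in the article's equation (5).

```python
import numpy as np

def sigmoid(x, eps=0.015):
    # Stand-in for the output function f in (5); steepness eps is an assumed value.
    # Clipping avoids floating-point overflow in exp for extreme inputs.
    z = np.clip(x / eps, -500.0, 500.0)
    return 1.0 / (1.0 + np.exp(-z))

def step(xi, eta, zeta, x, A, W, ks, km, kr, v, alpha, eps):
    """One synchronous update of all neurons, following Eqs. (1)-(4)."""
    xi_n   = ks * xi + v * A           # external-input term, Eq. (2)
    eta_n  = km * eta + W @ x          # mutual-coupling term, Eq. (3)
    zeta_n = kr * zeta - alpha * x     # refractory term, Eq. (4)
    x_n = sigmoid(xi_n + eta_n + zeta_n, eps)  # output, Eq. (1)
    return xi_n, eta_n, zeta_n, x_n

# Toy run with assumed parameters on a 10-neuron network.
rng = np.random.default_rng(0)
n = 10
W = rng.normal(scale=0.1, size=(n, n))          # assumed random connection weights
x = rng.random(n)                               # initial outputs
xi, eta, zeta = np.zeros(n), np.zeros(n), np.zeros(n)
A = rng.integers(0, 2, size=n).astype(float)    # one binary external input per neuron
for _ in range(5):
    xi, eta, zeta, x = step(xi, eta, zeta, x, A, W,
                            ks=0.95, km=0.1, kr=0.95,
                            v=2.0, alpha=2.0, eps=0.015)
print(x.shape)  # (10,)
```

The three internal state variables ξ, η, ζ decay with their own constants each step, which is exactly the temporal summation discussed in the article; the simplified network studied there drops this accumulation to save computation.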
