On Temporal Summation in Chaotic Neural Network with Incremental Learning

Toshinori Deguchi (Gifu National College of Technology, Gifu, Japan), Toshiki Takahashi (Gifu National College of Technology, Gifu, Japan) and Naohiro Ishii (Aichi Institute of Technology, Toyota, Japan)
Copyright: © 2014 | Pages: 13
DOI: 10.4018/ijsi.2014100106


Incremental learning is a method to compose an associative memory using a chaotic neural network; it provides larger capacity than correlative learning at the cost of a large amount of computation. A chaotic neuron performs spatiotemporal summation, and the temporal summation makes the learning robust to input noise. When the input contains no noise, the neuron may not need temporal summation. In this paper, to reduce the computation, a simplified network without temporal summation is introduced and investigated through computer simulations, in comparison with the original network, which is called here the usual network. It turns out that the simplified network has the same capacity as the usual network and learns faster, but that it loses the ability to learn from noisy inputs. To improve this ability, the parameters of the chaotic neural network are adjusted.

Chaotic Neural Networks And Incremental Learning

Incremental learning was developed using chaotic neurons. The chaotic neurons and the chaotic neural networks were proposed by Aihara (Aihara, Takabe & Toyoda, 1990).

We presented the incremental learning that provides an associative memory (Asakawa, Deguchi & Ishii, 2001; Deguchi & Ishii, 2004; Deguchi, Fukuta & Ishii, 2013; Deguchi, Matsuo, Kimura & Ishii, 2009-07; Deguchi, Takahashi & Ishii, 2014). The network is an interconnected network in which each neuron receives one external input, and it is defined as follows (Aihara, Takabe & Toyoda, 1990):

$$x_i(t+1) = f\bigl(\xi_i(t+1) + \eta_i(t+1) + \zeta_i(t+1)\bigr), \quad (1)$$
$$\xi_i(t+1) = k_s\,\xi_i(t) + v\,A_i(t), \quad (2)$$
$$\eta_i(t+1) = k_m\,\eta_i(t) + \sum_{j=1}^{n} w_{ij}\,x_j(t), \quad (3)$$
$$\zeta_i(t+1) = k_r\,\zeta_i(t) - \alpha\,x_i(t) - \theta_i\,(1 - k_r), \quad (4)$$

where $x_i(t)$ is the output of the $i$-th neuron at time $t$, $f$ is the output sigmoid function described below in (5), $k_s$, $k_m$, $k_r$ are the time decay constants, $A_i(t)$ is the input to the $i$-th neuron at time $t$, $v$ is the weight for external inputs, $n$ is the size—the number of the neurons in the network, $w_{ij}$ is the connection weight from the $j$-th neuron to the $i$-th neuron, $\alpha$ is the parameter that specifies the relation between the neuron output and the refractoriness, and $\theta_i$ is the threshold of the $i$-th neuron.
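The synchronous update defined by (1)–(4) can be sketched in Python as follows. The three internal states $\xi$, $\eta$, $\zeta$ carry the external-input, mutual-connection, and refractoriness terms, each with its own decay constant. The parameter values (`ks`, `km`, `kr`, `v`, `alpha`, `theta`, and the sigmoid steepness `epsilon`) are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def sigmoid(u, epsilon=0.1):
    # Output function f in (5); epsilon is an assumed steepness parameter.
    return 1.0 / (1.0 + np.exp(-u / epsilon))

def step(xi, eta, zeta, x, A, w,
         ks=0.95, km=0.1, kr=0.95, v=2.0, alpha=2.0, theta=0.0):
    """One synchronous update of all n neurons following (1)-(4).

    Parameter values here are illustrative, not taken from the paper.
    """
    xi_new = ks * xi + v * A                              # (2) external-input term
    eta_new = km * eta + w @ x                            # (3) mutual-connection term
    zeta_new = kr * zeta - alpha * x - theta * (1 - kr)   # (4) refractoriness term
    x_new = sigmoid(xi_new + eta_new + zeta_new)          # (1) neuron output
    return xi_new, eta_new, zeta_new, x_new

# Usage: iterate a small network on a fixed binary input pattern.
n = 10
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(n, n))      # connection weights w_ij
xi = np.zeros(n)
eta = np.zeros(n)
zeta = np.zeros(n)
x = rng.random(n)                           # initial outputs
A = rng.integers(0, 2, size=n).astype(float)  # external input pattern
for _ in range(5):
    xi, eta, zeta, x = step(xi, eta, zeta, x, A, w)
```

Because $f$ maps into $(0, 1)$, every neuron output stays in that range, while the decayed internal sums accumulate the history of inputs—this accumulation is the temporal summation that the paper's simplified network removes.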
