1. Introduction
The Reservoir Computing design methodology (Maass, Natschläger & Markram, 2002; Jaeger, Maass & Principe, 2007) consists of building a neural network divided into three parts. The first part is an input layer. The second part is the reservoir, namely a pool of randomly and recurrently connected nodes with fixed connection weights. These nodes can be any type of spiking neuron. The third part is an output layer consisting of a readout mechanism that reads the transient activity of the pool (i.e. the activation of every node in the pool) and performs a classification task accordingly. In the context of Reservoir Computing, an input should be a time-varying signal. The connections from the input layer to the pool, and the connections between the neurons that constitute the pool, have randomly set weights that remain fixed during training (i.e. their weights are kept fixed and undergo no changes). The connections from the pool to the output layer are also randomly initialized, but their weights are flexible: they are the only connections of the network that undergo training (i.e. their weights are updated during training). Fundamental models of Reservoir Computing are the Liquid State Machine (LSM) (Maass, Natschläger & Markram, 2002; Maass & Markram, 2004), the Echo State Network (ESN) (Jaeger, Maass & Principe, 2007) and Nonlinear Transient Computation (NTC) (Crook, 2007). Inspired by the work of Maass and Crook, we implement the chaotic Liquid State Machine model presented herein. First, the Chaotic LSM incorporates a synaptic plasticity mechanism inside the liquid layer, an approach suggested by Maass et al. (2002) to enhance the ‘Separation Property’ of the machine.
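The three-part design described above can be sketched as follows. This is only an illustrative toy, not the implementation presented in this paper: it uses simple tanh rate units in place of spiking neurons, and the sizes, weight ranges and input signal are arbitrary values chosen for the sketch. The key structural point it shows is that the input and recurrent weights are generated randomly and never trained, while only the states collected for the readout would be fitted.

```python
import random
import math

random.seed(0)

N_IN, N_RES = 1, 20   # input dimension and pool size (toy values)

# Randomly set, fixed weights: input -> pool and pool -> pool.
# These are never updated during training.
W_in  = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)]  for _ in range(N_RES)]
W_res = [[random.uniform(-0.1, 0.1) for _ in range(N_RES)] for _ in range(N_RES)]

def step(state, u):
    """One reservoir update: each node mixes recurrent and input drive."""
    new = []
    for i in range(N_RES):
        drive  = sum(W_res[i][j] * state[j] for j in range(N_RES))
        drive += sum(W_in[i][k]  * u[k]     for k in range(N_IN))
        new.append(math.tanh(drive))
    return new

# Drive the pool with a time-varying input and record its transient activity;
# only a linear readout trained on `states` would be learned, e.g. by regression.
state  = [0.0] * N_RES
states = []
for t in range(50):
    u = [math.sin(0.3 * t)]          # time-varying input signal
    state = step(state, u)
    states.append(state)

print(len(states), len(states[0]))   # → 50 20
```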
Second, the Chaotic LSM uses a minimal number of chaotic spiking neurons inside the liquid layer, an approach suggested by Crook (2007) that can substitute for the nonlinear dynamics offered by the large number of simple Leaky Integrate-and-Fire (LIF) neurons commonly used in the traditional design of an LSM. Third, the Chaotic LSM implements the theory of Delayed Feedback Control (DFC) (Pyragas, 2003) on the neuron connections to stabilize the chaotic dynamics of the liquid when the latter is fed with external input. In such a chaos control scheme, the chaotic LSM operates in the critical region between chaos and order (i.e. at the edge of chaos (Langton, 1990; Natschläger, Bertschinger & Legenstein, 2004)), which can contribute to its generalization capability, especially since it is combined with synaptic plasticity, synaptic scaling and the use of one class of similar training inputs (Maass, Legenstein & Bertschinger, 2005; Legenstein & Maass, 2006; Legenstein & Maass, 2007).
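The essence of Pyragas-style delayed feedback control is to add a signal proportional to the difference between a delayed and the current state, u(t) = K·(x(t − τ) − x(t)), which vanishes once a period-τ orbit is stabilized, so the control does not distort the orbit it targets. A minimal discrete-time sketch on the chaotic logistic map illustrates the idea; this is a stand-in for the chaotic neuron dynamics in the liquid, and the parameter values r, K and τ below are toy values chosen for the sketch, not taken from the paper.

```python
r   = 3.8    # logistic-map parameter in the chaotic regime
K   = -0.7   # feedback gain (toy value that stabilizes the fixed point)
tau = 1      # delay in map iterations; targets the period-1 orbit

history = [0.65, 0.70]   # initial states near the fixed point x* = 1 - 1/r

for _ in range(500):
    x = history[-1]
    # DFC term: proportional to the delayed state minus the current state
    u = K * (history[-1 - tau] - x)
    x_next = r * x * (1.0 - x) + u
    history.append(min(max(x_next, 0.0), 1.0))  # keep state in the map's domain

# Without control the map wanders chaotically; with control the trajectory
# settles onto the unstable fixed point x* = 1 - 1/r and u decays to zero.
print(f"{history[-1]:.4f}")   # → 0.7368
```

The same principle applied in continuous time to the chaotic neurons' connections is what allows the liquid to sit near the boundary between chaos and order when external input arrives.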