
Chun-Cheng Peng (University of London, UK) and George D. Magoulas (University of London, UK)

DOI: 10.4018/978-1-59904-849-9.ch207

Chapter Preview

In the literature, several classification schemes have been proposed to organise RNN architectures, each starting from a different classification principle: some consider the loops of nodes in the hidden layers, while others take the types of output into account. For example, RNNs can be organised into *canonical* RNNs and *dynamic MLPs* (Tsoi, 1998a); *autonomous converging* and *non-autonomous non-converging* (Bengio et al., 1993); *locally recurrent* (receiving feedback from the same or a directly connected layer), *output feedback*, and *fully connected* (i.e. all nodes are capable of receiving and transferring feedback signals to the other nodes, even across different layers) RNNs (dos Santos & Zuben, 2000); *binary* and *analog* RNNs (Orponen, 2000).
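The locally recurrent and output-feedback categories above can be illustrated with a minimal sketch, assuming one hidden layer and tanh activations; the single-step functions, weight-matrix names, and list-based vectors are all illustrative choices, not code from the chapter:

```python
import math

def tanh_vec(v):
    return [math.tanh(x) for x in v]

def elman_step(x, h_prev, W_in, W_rec):
    # Locally recurrent (Elman-style): the hidden layer
    # receives feedback from its own previous activations.
    s = [sum(W_in[i][j] * x[j] for j in range(len(x)))
         + sum(W_rec[i][k] * h_prev[k] for k in range(len(h_prev)))
         for i in range(len(W_in))]
    return tanh_vec(s)

def jordan_step(x, y_prev, W_in, W_fb):
    # Output feedback (Jordan-style): the network's previous
    # *output* is fed back to the hidden layer instead.
    s = [sum(W_in[i][j] * x[j] for j in range(len(x)))
         + sum(W_fb[i][k] * y_prev[k] for k in range(len(y_prev)))
         for i in range(len(W_in))]
    return tanh_vec(s)
```

A fully connected RNN, by contrast, would allow every node to send feedback to every other node, including nodes in different layers.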

Real-Time Recurrent Learning: A general approach to training an arbitrary recurrent network by adjusting weights along the error gradient. This algorithm usually requires very low learning rates because of the inherent correlations between successive node outputs.
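For the simplest possible case, a single self-recurrent neuron, the gradient along which RTRL adjusts the weights can be computed online by carrying *sensitivities* (derivatives of the output with respect to each weight) forward in time. The following is a minimal sketch under that single-neuron assumption; the function name, initial weights, and learning rate are illustrative:

```python
import math

def rtrl_single_neuron(xs, ds, w_in=0.1, w_rec=0.1, lr=0.05):
    """RTRL for one self-recurrent tanh neuron.

    y(t) = tanh(w_in*x(t) + w_rec*y(t-1)); the sensitivities
    p_in = dy/dw_in and p_rec = dy/dw_rec are updated recursively,
    so a gradient step can be taken at every time step (online).
    """
    y = p_in = p_rec = 0.0
    for x, d in zip(xs, ds):
        y_new = math.tanh(w_in * x + w_rec * y)
        fprime = 1.0 - y_new * y_new
        # Recursive sensitivity updates (the core of RTRL).
        p_in = fprime * (x + w_rec * p_in)
        p_rec = fprime * (y + w_rec * p_rec)
        err = y_new - d
        # Gradient step on the instantaneous squared error.
        w_in -= lr * 2.0 * err * p_in
        w_rec -= lr * 2.0 * err * p_rec
        y = y_new
    return w_in, w_rec
```

Because each sensitivity depends on its own previous value, consecutive updates are strongly correlated, which is why small learning rates are usually needed in practice.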

Backpropagation through Time: An algorithm for recurrent neural networks that uses the gradient descent method. It attempts to train a recurrent neural network by unfolding it into a multilayer feedforward network that grows by one layer for each time step, a procedure also called unfolding in time.
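The unfolding idea can be sketched for a single self-recurrent tanh neuron: the forward pass stores one activation per time step (one "layer" of the unrolled network), and the backward pass propagates the error back through those layers, accumulating a gradient for the shared weights. This is a minimal sketch, assuming the error is measured only at the final step:

```python
import math

def bptt_grad(xs, w_in, w_rec, d):
    """BPTT for one self-recurrent tanh neuron.

    Forward: unfold y(t) = tanh(w_in*x(t) + w_rec*y(t-1)) into one
    feedforward layer per time step, keeping every activation.
    Backward: propagate the error E = (y(T) - d)^2 back through the
    unrolled layers, summing gradients for the shared weights.
    """
    ys = [0.0]
    for x in xs:
        ys.append(math.tanh(w_in * x + w_rec * ys[-1]))
    g_in = g_rec = 0.0
    delta = 2.0 * (ys[-1] - d)          # dE/dy at the last step
    for t in range(len(xs) - 1, -1, -1):
        fprime = 1.0 - ys[t + 1] ** 2
        g_in += delta * fprime * xs[t]
        g_rec += delta * fprime * ys[t]
        delta = delta * fprime * w_rec  # pass error one layer further back
    return g_in, g_rec
```

The gradients agree with finite-difference estimates of the same error, which is a standard sanity check for hand-written backpropagation code.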

Artificial Neural Network: A network of many simple processors, called “units” or “neurons”, which provides a simplified model of a biological neural network. The neurons are connected by links that carry numeric values corresponding to weightings and are usually organised in layers. Neural networks can be trained to find nonlinear relationships in data, and are used in applications such as robotics, speech recognition, signal processing or medical diagnosis.

Extended Kalman Filter: An online learning algorithm that determines the weights of a recurrent network from target outputs as the network runs. It is based on the idea of Kalman filtering, a well-known linear recursive technique for estimating the state vector of a linear system from a set of noisy measurements.
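The underlying recursion can be illustrated with the plain linear filter in its simplest, scalar form; this sketch tracks a constant hidden value from noisy measurements and is not the full weight-estimation algorithm (EKF training applies the same predict/update cycle to the network's weight vector, linearising the network around the current estimate). The noise variances `q` and `r` are illustrative assumptions:

```python
def kalman_constant(measurements, q=1e-5, r=0.1):
    """Scalar Kalman filter tracking a constant hidden value.

    x is the state estimate and p its error variance; q and r are
    the assumed process and measurement noise variances.
    """
    x, p = 0.0, 1.0
    for z in measurements:
        p = p + q                # predict: uncertainty grows by q
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the measurement residual
        p = (1.0 - k) * p        # posterior variance shrinks
    return x, p
```

The gain `k` weighs new evidence against accumulated certainty, so early measurements move the estimate a lot and later ones progressively less.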

Sequence Processing: A sequence is an ordered list of objects, events or data items. Processing of a sequence may involve one or a number of operations, such as classification of the whole sequence into a category; transformation of a sequence into another one; prediction or continuation of a sequence; generation of an output sequence from a single input.

Gradient Descent: A popular training algorithm that minimises the total squared error of the output computed by a neural network. To find a local minimum of the error function using gradient descent, one takes steps proportional to the negative of the gradient (or the approximate gradient) of the function at the current point.
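The stepping rule can be sketched in a few lines; the function names, learning rate, and the example objective f(x) = (x - 3)^2 are illustrative choices:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Generic gradient descent: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Example: minimise f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
minimum = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

Each step shrinks the distance to the minimum by a constant factor here; for neural networks the same rule is applied to every weight, with the gradient supplied by backpropagation.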

Recurrent Neural Network: An artificial neural network with feedback connections. This is in contrast to what happens in a feedforward neural network, where the signal simply passes from the input neurons, through the hidden neurons, to the output nodes.
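The practical consequence of feedback connections is that the network keeps state between inputs. A minimal sketch with a single recurrent neuron (weights chosen arbitrarily for illustration) makes this visible: the same input value produces different outputs at different positions in the sequence, which a feedforward network could never do:

```python
import math

def rnn_forward(xs, w_in=0.5, w_rec=0.8):
    """Run a one-neuron recurrent network over a sequence.

    The hidden state h persists between inputs, so each output
    depends on the whole history, not just the current input.
    """
    h, outputs = 0.0, []
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)  # feedback: new h uses old h
        outputs.append(h)
    return outputs

# Identical inputs, different outputs, because the state changes:
ys = rnn_forward([1.0, 1.0, 1.0])
```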

Neural Architecture: Particular organisation of artificial neurons and connections between them in an artificial neural network.

Training Algorithm: A step-by-step procedure for adjusting the connection weights of an artificial neural network. In supervised training, the desired (correct) output for each input vector of a training set is presented to the network, and many iterations through the training data may be required to adjust the weights. In unsupervised training, the weights are adjusted without specifying the correct output for any of the input vectors.
