Evaluating Scalability of Neural Configurations in Combined Classifier and Attention Models


Tsvi Achler
Copyright: © 2013 | Pages: 13
DOI: 10.4018/978-1-4666-3942-3.ch012

Abstract

The brain’s neuronal circuits responsible for recognition and attention are not completely understood. Several potential circuits using different mechanisms have been proposed. These models may vary in the number of connection parameters, the meaning of each connection weight, their efficiency, and their ability to scale to larger networks. Explicit analysis of these issues is important because, for example, certain models may require an implausible number of connections (more than are available in the brain) in order to process the amount of information the brain can process. Moreover, certain classifiers may perform recognition but may be difficult to integrate efficiently with attention models. In this chapter, some of these limitations and scalability issues are discussed, and a class of models that may address them is suggested. The focus is on modeling both recognition and a form of attention called biased competition. Models that are static and models that are dynamic during recognition are both explored.
Chapter Preview

Background

Neural Network Configurations and Recognition Algorithms

Many types of neural network classifiers and attention models can be found in the literature. Fortunately, most can be described using a standard notation. Thus, before we begin with a review of classifiers, let’s define that notation. Let vector Y represent the activity of a set of labeled nodes, which may be called output neurons or classes in different literatures and are individually written as

Y = (Y1, Y2, Y3, ..., YH)^T.

They are considered supervised if the nodes can be labeled, for example: Y1 represents “dog”, Y2 represents “cat”, and so on. Vector X represents sensory neurons or nodes that sample the environment and represent the input space to be recognized. These nodes represent input features and are written as X = (X1, X2, X3, ..., XN)^T. The input features can be sensors that detect edges, lines, frequencies, kernel features, and so on. Output neurons are tuned to specific patterns, so let’s expand the notation further. Assume neuron Y1 is optimally tuned to an input pattern “A”. We label the neuron with this pattern and write it as neuron YA. We describe its optimal input pattern as X^A, where pattern “A” is represented by the feature sensory nodes with values X^A = (X1^A, X2^A, X3^A, ..., XN^A)^T.
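To make this notation concrete, the following is a minimal sketch expressing the vectors as NumPy arrays. The dimensions (N = 4 features, H = 3 output nodes) and the example pattern X^A are arbitrary values chosen purely for illustration.

```python
# Illustrative only: dimensions and pattern values are arbitrary.
import numpy as np

N, H = 4, 3                       # N input features, H labeled output nodes

X = np.zeros(N)                   # X = (X1, X2, ..., XN)^T: sensory/input feature nodes
Y = np.zeros(H)                   # Y = (Y1, Y2, ..., YH)^T: labeled output nodes (e.g., "dog", "cat", ...)

# X^A: the input pattern to which a labeled output node YA is optimally tuned
X_A = np.array([1.0, 0.0, 1.0, 0.5])
```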

Next, let’s distinguish between supervised and unsupervised methods and between phases in recognition. Unsupervised algorithms learn patterns without label constraints on Y and may find efficient representations (e.g., Olshausen & Field, 1996; Hyvärinen et al., 2009). Although unsupervised methods perform essential roles in dimensionality reduction and efficient coding, recognition is ultimately a process of associating labels with patterns. Without labeled associations the brain cannot interact with the world, e.g., find food, find mates, and avoid hazards. Thus we initially focus on single-layer supervised models here. Future work will include comparisons of methods that include hidden layers and mixtures of supervised and unsupervised methods.

To distinguish between supervised recognition models, let’s define two phases of algorithm function: recognition and learning. Recognition is when an algorithm finds values for the outputs Y without modifying connection weights. Learning is when an algorithm modifies its connection weights (regardless of whether it calculates Y in the process). The most crucial aspect of understanding the differences between the models is the underlying neural configuration and when the model displays dynamics. Figure 1 compares basic neural connections between inputs and outputs.
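The following is a minimal sketch of the two phases, assuming a generic single-layer linear classifier with an H x N weight matrix W; the update shown is an ordinary delta rule, used only to illustrate the distinction, not any particular model from this chapter.

```python
import numpy as np

def recognize(W, X):
    """Recognition phase: compute outputs Y from inputs X; the weights W are not modified."""
    return W @ X

def learn(W, X, Y_target, lr=0.1):
    """Learning phase: modify the connection weights W (Y is calculated in the process)."""
    Y = W @ X                        # forward pass
    error = Y_target - Y             # teaching signal
    W += lr * np.outer(error, X)     # delta-rule weight update
    return W
```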

Figure 1.

Comparison of configurations. Using feedforward methods, information flows from inputs to output nodes (top). After feedforward processing, lateral connections may connect between outputs or back to the inputs of a different node (middle). Generative methods use auto-associative connections with symmetrical feedforward and feedback connections: each node projects back to its own inputs (bottom).

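The three configurations in Figure 1 can be sketched as follows. Here W is an H x N feedforward weight matrix and L an H x H lateral weight matrix; both, and the simple update rules, are illustrative placeholders rather than the specific circuits analyzed in this chapter.

```python
import numpy as np

def feedforward(W, X):
    """Top: information flows from inputs directly to output nodes."""
    return W @ X

def feedforward_with_lateral(W, L, X):
    """Middle: a feedforward pass followed by lateral connections among the outputs."""
    Y = W @ X
    return Y + L @ Y                          # lateral interactions between output nodes

def auto_associative(W, X, steps=20, rate=0.1):
    """Bottom (generative): each output projects back to its own inputs (symmetrical W and W^T)."""
    Y = W @ X
    for _ in range(steps):
        X_hat = W.T @ Y                       # feedback: outputs reconstruct their inputs
        Y = Y + rate * (W @ (X - X_hat))      # adjust outputs to better explain the inputs
    return Y
```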

It is important to keep this distinction between phases of algorithm function in mind; otherwise, these configurations can be confusing. For example, even though feedforward methods are feedforward during recognition, implementing their learning algorithms, such as backpropagation (Rumelhart & McClelland, 1986) or a delta rule (Rosenblatt, 1958), requires auto-associative feedforward-feedback connections and dynamics. Thus feedforward methods are feedforward during recognition but feedforward-feedback during learning. Conversely, methods that are feedforward-feedback during recognition may use simple feedforward configurations during learning (Achler, 2012). Now we are ready to evaluate recognition algorithms.
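The following sketch illustrates this reversal of roles. The error-driven update stands in for delta-rule or backpropagation-style learning, and the one-shot associative rule stands in for feedforward learning; neither is a reproduction of the specific method in Achler (2012).

```python
import numpy as np

def error_driven_learning(W, X, Y_target, lr=0.1):
    """Feedforward during recognition, but learning requires the output error to be fed back."""
    Y = W @ X                                # feedforward recognition
    error = Y_target - Y                     # error signal propagated back to the weights
    W += lr * np.outer(error, X)
    return W

def associative_learning(W, X, label_index):
    """Feedforward learning: store the labeled pattern directly as that node's weights
    (recognition is then performed iteratively, as in the auto-associative sketch above)."""
    W[label_index] = X / max(X.sum(), 1e-9)  # normalize and associate the pattern with its label
    return W
```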
