An Approach to Artificial Concept Learning Based on Human Concept Learning by Using Artificial Neural Networks

Enrique Mérida-Casermeiro, Domingo López-Rodríguez, J.M. Ortiz-de-Lazcano-Lobato
DOI: 10.4018/978-1-59904-996-0.ch008

Abstract

In this chapter, two important issues concerning associative memory by neural networks are studied: a new model of Hebbian learning, and the effect of the network capacity when retrieving patterns and performing clustering tasks. In particular, an explanation of the behavior of the energy function when the capacity is exceeded is given: the limitation in pattern storage implies that similar patterns will be identified by the network, thereby forming different clusters. This ability can be interpreted as unsupervised learning of pattern clusters, with one major advantage over most clustering algorithms: the number of data classes is learned automatically, as confirmed by the experiments. Two methods to reinforce learning are proposed to improve the quality of the clustering by enhancing the learning of pattern relationships. As a related issue, a study of the net capacity, depending on the number of neurons and possible outputs, is presented, and some interesting conclusions are discussed.

Introduction

Hebb (1949) introduced a physiological learning method based on the reinforcement of the interconnection strength between neurons. It was explained in the following terms:

When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.
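
As a minimal illustration (ours, not the chapter's), this postulate is often formalized as a local weight update proportional to the product of pre- and postsynaptic activity. A Python sketch, where the learning rate eta and the bipolar activities are illustrative assumptions:

```python
import numpy as np

def hebbian_update(W, x, eta=0.1):
    """One Hebbian step: strengthen w_ij when neurons i and j fire together.

    W   : (N, N) weight matrix
    x   : (N,) vector of neuron activities (e.g. bipolar -1/+1)
    eta : learning rate (illustrative value, not from the chapter)
    """
    W = W + eta * np.outer(x, x)   # co-active pairs are reinforced
    np.fill_diagonal(W, 0.0)       # no self-connections
    return W
```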

This kind of learning method has been widely applied to recurrent networks in order to store and retrieve patterns in terms of their similarity. Among the models that use this learning rule are the bipolar model (BH) presented by J. J. Hopfield (Hopfield, 1982), a powerful neural model for content-addressable memory, and its analog version. These networks, although successful in solving many combinatorial optimization problems, present two main problems when used as content-addressable memories: their low capacity and the appearance of spurious patterns.

The capacity parameter α is usually defined as the quotient between the maximum number of patterns that can be loaded into the network and the number of neurons used, subject to an acceptable error probability (usually p_error = 0.05 or 0.01). It has been shown that this constant is approximately α = 0.15 for BH.

This value means that, in order to load K patterns, more than K/α neurons will be needed to achieve an error probability lower than or equal to p_error. Equivalently, if the net is formed by N neurons, the maximum number of patterns that can be loaded into the net (with that error constraint) is K < αN; for example, with N = 200 neurons and α = 0.15, at most about 30 patterns can be stored reliably.
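
As a rough numerical check of this bound (our own sketch, not the chapter's code; the network size N = 200, the one-step error measure, and the tested values of K are illustrative assumptions), the fraction of bits flipped after a single update stays near zero while K < αN ≈ 30 and grows sharply beyond it:

```python
import numpy as np

rng = np.random.default_rng(0)

def retrieval_error(N, K):
    """Fraction of bits flipped after one synchronous update,
    averaged over K stored random bipolar patterns."""
    patterns = rng.choice([-1, 1], size=(K, N))
    W = patterns.T @ patterns / N       # Hebbian (outer-product) storage
    np.fill_diagonal(W, 0.0)
    recalled = np.sign(patterns @ W)    # W is symmetric: row k is sign(W x_k)
    recalled[recalled == 0] = 1
    return np.mean(recalled != patterns)

for K in (10, 30, 60):                  # with N = 200, alpha*N is about 30
    print(K, retrieval_error(200, K))
```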

Since patterns are associated with states of the network with minimal energy, we wonder what happens to these states when the network capacity is exceeded.

The main idea of this chapter is that when patterns are very close to each other, or when the net capacity is exceeded, the local minima corresponding to similar patterns tend to be combined, forming one unique local minimum. So, although considered a limitation of the net as an associative memory, this fact can explain the way in which the human brain forms concepts: several patterns, all of them similar to a common typical representative, are associated and form a group in which particular features are no longer distinguishable.

Obviously, in both cases (artificial and natural, i.e. human, concept learning), enough samples are needed to generalize and to stop distinguishing their particular features. If there are only a few samples from some class, they will still be retrieved by the net individually, that is, as in an associative memory.
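
The following sketch illustrates this idea empirically (it is our own illustration, not code from the chapter; the network size, the number of prototypes and variants, and the noise level are arbitrary choices): many noisy variants of a few prototypes are stored, deliberately exceeding the capacity, and each variant is then relaxed to check that it falls into the merged minimum of its prototype.

```python
import numpy as np

rng = np.random.default_rng(1)
N, C, M, flip = 200, 3, 40, 0.1       # neurons, prototypes, variants each, noise

prototypes = rng.choice([-1, 1], size=(C, N))
variants = np.repeat(prototypes, M, axis=0)
mask = rng.random(variants.shape) < flip
variants[mask] *= -1                  # noisy copies of each prototype

W = variants.T @ variants / N         # store ALL 120 variants: capacity exceeded
np.fill_diagonal(W, 0.0)

def settle(x, steps=20):
    """Iterate synchronous updates until a fixed point (or step limit)."""
    for _ in range(steps):
        new = np.sign(W @ x)
        new[new == 0] = 1
        if np.array_equal(new, x):
            break
        x = new
    return x

# Each variant should fall into the basin of its prototype's merged minimum.
labels = [np.argmax(prototypes @ settle(v.copy())) for v in variants]
true = np.repeat(np.arange(C), M)
print("cluster agreement:", np.mean(np.array(labels) == true))
```

The printed agreement should be close to 1, suggesting that the number of clusters (here C) emerges from the attractor structure rather than being supplied to the algorithm.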


Neural Background

Associative memory has received much attention over the last two decades. Although numerous models have been developed and investigated, the most influential is Hopfield's associative memory, based on his bipolar model (Hopfield, 1982). This kind of memory arose as a result of his studies on collective computation in neural networks.

Hopfield's model consists of a fully interconnected set of bi-valued neurons (outputs are either -1 or +1). Neural connection strength is expressed in terms of the weight matrix W = (w_ij), where w_ij represents the synaptic connection between neurons i and j. This matrix is determined in the learning phase by applying Hebb's postulate of learning (Hebb, 1949), and no further synaptic modification is considered afterwards.
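
In the usual formulation of this one-shot Hebbian storage (standard Hopfield notation; this display is ours, not quoted from the chapter), the weights for K stored bipolar patterns ξ^(1), …, ξ^(K) are

$$ w_{ij} = \frac{1}{N} \sum_{k=1}^{K} \xi_i^{(k)} \xi_j^{(k)}, \qquad w_{ii} = 0. $$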

Two main problems arise in this model: the appearance of spurious patterns and its low capacity.

Spurious patterns are stable states, that is, local minima of the corresponding energy function of the network, that are not associated with any stored (input) pattern. The simplest, but not the least important, case is the storage of the opposite of every pattern: both X and -X are stable states of the net, although only one of them has been introduced as an input pattern.
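
This sign symmetry follows directly from the energy function E(x) = -(1/2) xᵀWx: since E(-x) = E(x), whenever X is stable, so is -X. A quick Python check for a single stored pattern (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100
X = rng.choice([-1, 1], size=N)

W = np.outer(X, X) / N              # Hebbian storage of the single pattern X
np.fill_diagonal(W, 0.0)

def is_stable(x):
    """True if x is a fixed point of the update x <- sign(W x)."""
    out = np.sign(W @ x)
    out[out == 0] = 1
    return np.array_equal(out, x)

print(is_stable(X), is_stable(-X))  # True True: -X is a spurious stable state
```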
