1. Introduction
Nowadays, artificial intelligence (AI) has a significant impact on our daily lives. However, the recognition of human-generated data such as voice, images, video, and handwritten characters has not yet received adequate attention, owing to the lack of standard solutions (Liang & Li, 2020; Hopfield, 1982). Associative Memory (AM) is an emerging research topic in pattern recognition for which no standard, optimal solution is yet available. An associative memory works as a content-addressable memory: it stores data in a distributed manner and is addressed through its contents. An AM can recall a complete pattern when triggered with a partial or noisy version of that pattern. Figure 1 illustrates the working of a content-addressable, or associative, memory. In the artificial intelligence literature, associative memories are broadly classified into two types: auto-associative and hetero-associative. An auto-associative memory focuses on recalling the original pattern when a distorted or noisy version of the pattern is given as input. A hetero-associative memory, in contrast, stores input-output pattern pairs, in which the input pattern may differ from the output pattern; recall of the output pattern is triggered by a noisy or partial version of the input pattern of the pair. Associative memories are usually implemented with artificial neural networks. The Hopfield neural network is a widely used artificial neural network for implementing auto-associative memory, and it mimics the functionality of the human brain (Hopfield, 1984; Hu et al., 2015).
Figure 1.
Working of content addressable memory
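The auto-associative behavior described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: it stores one bipolar (+1/-1) pattern with the standard Hebbian outer-product rule and recovers it from a probe with one flipped bit. The pattern values, sizes, and function names are illustrative choices.

```python
import numpy as np

def train_hebbian(patterns):
    """Build the Hopfield weight matrix W from a list of bipolar patterns."""
    n = patterns[0].size
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)      # Hebbian outer-product rule
    np.fill_diagonal(W, 0)       # no self-connections
    return W / n

def recall(W, probe, steps=10):
    """Iterate synchronous updates until the state stops changing."""
    s = probe.copy()
    for _ in range(steps):
        s_next = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_next, s):   # fixed point reached
            break
        s = s_next
    return s

# Store one pattern, then recall it from a noisy (content-addressed) probe.
stored = np.array([1, 1, -1, -1, 1, -1, 1, -1])
W = train_hebbian([stored])
noisy = stored.copy()
noisy[0] = -noisy[0]                    # flip one bit to make a noisy probe
print(np.array_equal(recall(W, noisy), stored))  # True: pattern recovered
```

Note that the memory is addressed by content alone: the probe itself, not an address, selects which stored pattern is retrieved.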
The Hopfield Neural Network (HNN) (Hopfield, 1982) is a dynamic feedback system in which the output of one iteration is fed as input to the next. Owing to these feedback connections, such networks are also termed "recurrent networks" and behave like nonlinear dynamic systems (Hopfield, 1984), which can produce several qualitatively different behaviors. One of these behaviors leads to stability: the network converges to a fixed, or motionless, point. At such a point the same state serves as both input and output (Hopfield, 1984), so the network remains in that state. The network can also exhibit oscillatory or chaotic behavior.
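The feedback dynamics just described can be made concrete with a small sketch (a hypothetical 6-neuron example, assuming the standard Hebbian weights and synchronous sign updates): a stored pattern is a fixed point, so feeding the network's output back as its input leaves the state unchanged.

```python
import numpy as np

def update(W, s):
    """One synchronous feedback step: s(t+1) = sign(W s(t))."""
    return np.where(W @ s >= 0, 1, -1)

# Hebbian weights for a single stored bipolar pattern (illustrative values).
pattern = np.array([1, -1, 1, 1, -1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)          # no self-connections

# The stored pattern is a fixed point: the output equals the input,
# so iterating the feedback loop keeps the network in the same state.
print(np.array_equal(update(W, pattern), pattern))  # True
```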
It has been observed that a Hopfield neural network can operate as a stable system with more than one fixed point (Hebb, 1949). Which fixed point the network converges to is determined by the initial state chosen at the beginning of the iteration. In a Hopfield network these fixed points are called attractors, and the set of states drawn towards a particular attractor during iteration is known as its basin of attraction. Every state within a basin of attraction is associated with its attractor. As an example, consider a specific (desired) image as an attractor whose basin of attraction contains noisy or partial versions of that image: any degraded image that vaguely resembles the desired one can be recovered by the network. The set of these attractors is called a memory, and in this case the network operates as an associative memory. However, the HNN suffers from a large number of spurious attractors. The network may become trapped in these attractors, preventing the stored memory patterns from being retrieved; the presence of these spurious (false) minima thus increases the probability of error when recalling stored patterns. Viewing the Hopfield network as a dynamic system, the memory attractors sit at minima of the energy landscape, whereas spurious patterns lie closer to the starting point of the basin of attraction.
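The energy-landscape view can be illustrated with the standard Hopfield energy function, E(s) = -(1/2) s^T W s (a sketch with an illustrative 8-neuron pattern, assuming Hebbian weights): the attractor lies at lower energy than a noisy state in its basin, and an update step moves the state downhill towards it.

```python
import numpy as np

def energy(W, s):
    """Hopfield energy E(s) = -1/2 * s^T W s."""
    return -0.5 * s @ W @ s

# Hebbian weights for one stored bipolar pattern (illustrative values).
pattern = np.array([1, 1, -1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

noisy = pattern.copy()
noisy[:2] = -noisy[:2]          # a state inside the basin of attraction

# The attractor (stored pattern) sits at a lower energy than the noisy state.
print(energy(W, pattern) < energy(W, noisy))   # True

# One update step moves the state downhill in the energy landscape.
recalled = np.where(W @ noisy >= 0, 1, -1)
print(energy(W, recalled) <= energy(W, noisy))  # True
```

With many stored patterns, additional local minima (e.g., mixtures of stored patterns) appear in this same landscape; those are the spurious attractors discussed above.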