Optimization of Hopfield Neural Network for Improved Pattern Recall and Storage Using Lyapunov Energy Function and Hamming Distance: MC-HNN

Jay Kant Pratap Singh Yadav, Zainul Abdin Jaffery, Laxman Singh
Copyright: © 2022 | Pages: 25
DOI: 10.4018/IJFSA.296592

Abstract

In this paper, we propose a multiconnection-based Hopfield neural network (MC-HNN) based on the Hamming distance and the Lyapunov energy function to address the limited storage capacity and inadequate recall capability of the Hopfield neural network (HNN). The proposed method uses the Lyapunov energy function and the Hamming distance during the convergence phase to recall the correct stored pattern corresponding to a noisy test pattern. It also extends the storage capacity of the traditional HNN by storing individual patterns in the form of etalon arrays through unique connections among neurons; the storage capacity therefore depends on the number of connections and is independent of the total number of neurons in the network. The proposed method achieved an average recall success rate of 100% on bitmap images with noise levels of 0, 2, 4, and 6 bits, which is better than the recall success rates of the traditional and genetic-algorithm-based HNN methods. It also shows encouraging results on handwritten images compared with several recent state-of-the-art methods.
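As an illustration of the distance measure mentioned above, the following sketch computes the Hamming distance between bipolar (±1) patterns and selects the stored pattern closest to a noisy probe. The function names and the selection step are hypothetical illustrations and do not reproduce the authors' exact MC-HNN recall procedure.

```python
import numpy as np

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    # Number of positions at which two bipolar (+1/-1) patterns disagree
    return int(np.sum(a != b))

def closest_stored_pattern(probe: np.ndarray, stored: list) -> np.ndarray:
    # Pick the stored pattern with the smallest Hamming distance to the probe
    # (hypothetical helper for illustration only)
    distances = [hamming_distance(probe, p) for p in stored]
    return stored[int(np.argmin(distances))]
```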

1. Introduction

Nowadays, artificial intelligence (AI) has a significant impact on our daily lives. However, human aspects such as voice, pictures, video, and handwritten characters have not yet received adequate attention owing to the lack of standard solutions (Liang & Li, 2020; Hopfield, 1982). Associative memory (AM) is an emerging research topic in pattern recognition that still needs an optimal solution because no standard solution is available. An associative memory works as a content-addressable memory that stores data in a distributed manner and can be addressed through its contents. An AM has the capability to recall complete patterns when triggered with partial or noisy patterns. Figure 1 illustrates the working of a content-addressable (associative) memory. In the artificial intelligence literature, associative memories are broadly classified into two types: auto-associative memory and hetero-associative memory. In an auto-associative memory, the primary focus is on recalling the perfect pattern when a distorted or noisy version of that pattern is given as input. A hetero-associative memory, on the other hand, stores input-output pattern pairs in which the input pattern may differ from the output pattern, and recall of the output pattern is triggered by a noisy or partial version of the input pattern of the pair. Associative memories are usually implemented by artificial neural networks. The Hopfield neural network is a widely used artificial neural network for implementing auto-associative memory that mimics the functionality of the human brain (Hopfield, 1984; Hu et al., 2015).
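To make the auto-associative storage concrete, here is a minimal sketch of how bipolar (±1) patterns can be stored in a classical Hopfield network using the standard Hebbian outer-product rule. It illustrates the traditional HNN baseline, not the multiconnection scheme proposed in this paper.

```python
import numpy as np

def hebbian_weights(patterns: np.ndarray) -> np.ndarray:
    # patterns: array of shape (n_patterns, n_neurons) with entries in {+1, -1}
    n_patterns, n_neurons = patterns.shape
    W = patterns.T @ patterns / n_neurons   # sum of outer products, scaled by network size
    np.fill_diagonal(W, 0.0)                # Hopfield networks have no self-connections
    return W
```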

Figure 1. Working of content addressable memory

The Hopfield neural network (HNN) (Hopfield, 1982) is considered a dynamic feedback system, in which the output of the previous iteration is fed as input to the next iteration. The network is also termed a “recurrent network” owing to the presence of feedback connections and tends to behave like a nonlinear dynamic system (Hopfield, 1984), leading to the generation of multiple behavior patterns. One of these behavior patterns leads to the stability of the network, i.e., the network converges to a fixed (motionless) point. Owing to this capability, the same fixed point can be treated as both input and output of the network (Hopfield, 1984), which keeps the network in the same state. The network may also show oscillatory or chaotic behavior.
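The feedback dynamics described above can be sketched as an asynchronous update loop that repeatedly feeds the current state back into the network until no neuron changes, i.e., until a fixed point is reached. The zero-threshold sign update used here is the classical choice and is an illustrative assumption, not the specific rule of the proposed MC-HNN.

```python
import numpy as np

def recall(W: np.ndarray, state: np.ndarray, max_sweeps: int = 50, seed: int = 0) -> np.ndarray:
    # Asynchronously update one neuron at a time until the state stops changing
    rng = np.random.default_rng(seed)
    state = state.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(state)):           # random update order per sweep
            new_value = 1 if W[i] @ state >= 0 else -1  # zero-threshold sign rule
            if new_value != state[i]:
                state[i] = new_value
                changed = True
        if not changed:   # no neuron flipped during a full sweep: fixed point reached
            return state
    return state
```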

It has been observed that Hopfield neural networks can work as a stable system with more than one fixed point (Hebb, 1949). The fixed point to which the network converges is determined by the initial point chosen at the beginning of the iteration. In the case of the Hopfield neural network, these fixed points are called attractors, and the set of points attracted towards a particular attractor during the iteration is known as its basin of attraction. All points belonging to a basin of attraction are associated with its attractor. This can be understood through the following example: a specific (desirable) image is treated as an attractor, and its basin of attraction contains noisy or partial versions of that image, so a noisy or partial image that vaguely resembles the desired image can be recalled by the network as that image. The set of these attractors is called memory, and in this case the network can operate as an associative memory. However, the HNN suffers from a large number of spurious attractors. The network may get stuck in these attractors, which prevents the memory attractors from being retrieved. Thus, the presence of these spurious (false) minima increases the probability of error in recalling the stored patterns. If we consider the Hopfield neural network as a dynamic system, then the stored attractors lie at minimum energy values in the energy landscape, whereas spurious patterns lie nearer to the starting point of the basin of attraction.
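The energy-landscape view can be made explicit with the standard Lyapunov energy of a zero-threshold Hopfield network, E(s) = -1/2 sᵀWs, which does not increase under asynchronous updates; attractors (including spurious ones) therefore correspond to local minima of this function. The sketch below simply evaluates that energy for a given state and is an illustrative simplification rather than the paper's exact formulation.

```python
import numpy as np

def energy(W: np.ndarray, state: np.ndarray) -> float:
    # Lyapunov energy E = -1/2 * s^T W s for a bipolar state (zero thresholds assumed)
    return float(-0.5 * state @ W @ state)
```

Comparing the energy of a stored pattern with that of a noisy version of it shows the stored pattern sitting at a lower (locally minimal) energy, which is what makes it an attractor of the recall dynamics.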
