Recognition of Alphanumeric Patterns Using Backpropagation Algorithm for Design and Implementation With ANN

Alankrita Aggarwal, Shivani Gaba, Shally Chawla, Anoopa Arya
DOI: 10.4018/IJSPPC.295086

Abstract

An artificial neural network is applied here to the recognition of alphanumeric characters. The aim is to preserve obsolete data that exists only in hard copy by converting it into digital form. The network is trained on specific bit patterns corresponding to each character, with the numbers of input- and output-layer neurons chosen accordingly. Among the many available training methods, the network in this work is trained with the backpropagation algorithm using the delta rule. Training and testing patterns are supplied, the weights are computed in the program, the patterns are recognized, and the results are analyzed. The effect of varying the hidden layers on the pattern matrices is also observed.

1. Introduction

The basic functional outline described above involves considerable complexity and many exceptions; ANN models, by contrast, have simple characteristics and consist of thousands of processing units wired together into a composite network. Each node is a simplified neuron that fires when it receives an input signal from another node. Nodes are collected into layers of processing elements that make self-regulating decisions and pass their results on to other layers (McCulloch, W. S., & Pitts, W., 1943). The neurons of the next layer compute on that data and again pass their output forward. Every processing element computes a weighted sum of its inputs. The layers are the input layer, the hidden layers, and the output layer; the hidden layers are placed between the other two. Figure 1 represents the working of an artificial neural network (Minsky, M. L., & Papert, S. A., 1969; Minsky, M. L., & Papert, S. A., 1988).

Figure 1. Weighted sum of the inputs
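To make the layered flow concrete, the sketch below propagates an input vector through two hypothetical layers, each node computing a weighted sum of its inputs and firing when that sum is positive. The layer sizes, the random weights, and the simple firing rule are assumptions chosen for illustration, not the paper's configuration.

```python
import numpy as np

def forward_pass(x, weight_matrices):
    """Propagate an input vector through successive layers; each node
    computes a weighted sum of its inputs, then 'fires' (outputs 1)
    when that sum is positive. Illustrative sketch only."""
    activation = x
    for W in weight_matrices:
        summed = W @ activation                   # weighted sum at each node
        activation = (summed > 0).astype(float)   # node fires on positive input
    return activation

# Hypothetical 3-input -> 4-hidden -> 2-output network with random weights
rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
print(forward_pass(np.array([1.0, 0.0, 1.0]), layers))
```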

The input set labeled x1, x2, …, xn is applied to the artificial neuron and is collectively referred to as the vector X; it corresponds to the signals arriving at the synapses of a biological neuron. Before reaching the summation block, each signal is multiplied by an associated weight w1, w2, …, wn (Pitts, W., & McCulloch, W. S., 1947; Widrow, B., 1961). Each weight corresponds to the strength of a single biological synaptic connection. The set of weights is referred to collectively as the vector W. The summation block, analogous to the biological cell body, adds the weighted inputs algebraically to produce an output labeled SUM, written in vector notation as SUM = X · W, or:

SUM = x1*w1 + x2*w2 + x3*w3 + … + xn*wn   (1)
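As a minimal illustration of Equation (1), the sketch below computes SUM as the dot product of an input vector X and a weight vector W. The numeric values and the use of NumPy are assumptions for the example, not values from the paper.

```python
import numpy as np

# Equation (1): SUM = x1*w1 + x2*w2 + ... + xn*wn, i.e. the dot
# product of the input vector X and the weight vector W.
# Example values are illustrative only.
X = np.array([0.5, 1.0, 0.25])
W = np.array([0.8, -0.4, 0.6])

SUM = float(np.dot(X, W))   # same as (X * W).sum()
print(SUM)                  # 0.5*0.8 + 1.0*(-0.4) + 0.25*0.6 = 0.15
```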

2. Activation Functions

An activation function in an artificial neural network is attached to a node and produces that node's output for a given set of inputs. It can behave like a standard integrated-circuit gate, switching "ON" (1) or "OFF" (0) according to the input, and it resembles the linear perceptron; however, only nonlinear activation functions allow a network to compute non-trivial functions using a small number of nodes, because such functions introduce nonlinearities into the network.

2.1 Sigmoid Function

The squashing function F, a logistic or sigmoid function depicted in Figure 2, is expressed mathematically as:

F(x) = 1 / (1 + e^(-x))   (2)
Figure 2. Depiction of the Sigmoid Function

The nonlinear gain that this activation function gives the artificial neuron is calculated as the ratio of a change in F(x) to a small change in x; the gain is thus the slope of the curve at a specific excitation level. Figure 3 shows the activation function F accepting the value SUM produced by the summation block and producing the output signal OUT; F can also be a simple linear function (Widrow, B., & Angell, J. B., 1962; Widrow, B., & Hoff, M. E., 1960).
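Since the gain is the slope of the sigmoid, a brief sketch follows. It implements Equation (2) and uses the standard identity that the logistic function's derivative is F(x)·(1 − F(x)); the function names and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    """Equation (2): F(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_gain(x):
    """Slope of the sigmoid at excitation level x; for the logistic
    function this simplifies to F(x) * (1 - F(x))."""
    fx = sigmoid(x)
    return fx * (1.0 - fx)

print(sigmoid(0.0))       # 0.5  -> midpoint of the squashing range
print(sigmoid_gain(0.0))  # 0.25 -> maximum slope, at x = 0
```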

Figure 3. Artificial Neural Network working model

OUT = F(SUM)
OUT = 1 if SUM > T
OUT = 0 otherwise

where T is a constant threshold value.
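A minimal sketch of this thresholded output follows; the threshold value T = 0.5 is an assumption chosen for illustration, not a value given in the paper.

```python
def threshold_out(total, T=0.5):
    """OUT = 1 if SUM > T, else 0, with T a constant threshold.
    T = 0.5 is an illustrative choice, not a value from the paper."""
    return 1 if total > T else 0

print(threshold_out(0.7))  # 1: SUM exceeds the threshold
print(threshold_out(0.3))  # 0: SUM does not exceed it
```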
