1. Introduction
The basic functional outline described above involves considerable complexity and many exceptions; ANN models, by contrast, have simple characteristics and consist of thousands of processing units wired together into a composite network. Each node is a simplified model of a neuron that fires when it receives an input signal from another node. Nodes are collected into layers of processing elements that make self-regulating decisions and pass their results on to other layers (McCulloch, W. S., & Pitts, W., 1943). Neurons in the next layer perform calculations on the data and again pass their output to a new layer. Every processing element computes a weighted sum of its inputs. The layers are the input layer, the hidden layers, and the output layer; the hidden layers are placed between the other two. Figure 1 represents the working of an artificial neural network (Minsky, M. L., & Papert, S. A., 1969; Minsky, M. L., & Papert, S. A., 1988).
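The layered, weighted-sum computation described above can be sketched in Python. This is a minimal illustration only; the layer sizes, weights, and inputs below are invented for the example and are not taken from the article:

```python
import math

def neuron(inputs, weights):
    """Weighted sum of the inputs followed by a simple squashing activation."""
    total = sum(x * w for x, w in zip(inputs, weights))
    # Sigmoid activation squashes the sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_matrix):
    """Each row of weight_matrix holds one neuron's weights."""
    return [neuron(inputs, weights) for weights in weight_matrix]

# Illustrative 2-input -> 2-hidden -> 1-output network
x = [0.5, -0.2]                                 # input layer
hidden = layer(x, [[0.4, 0.9], [-0.7, 0.1]])    # hidden layer
output = layer(hidden, [[1.2, -0.8]])           # output layer
print(output)
```

Each layer passes its outputs forward as the inputs of the next layer, exactly as the paragraph above describes.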
Figure 1. Weighted sum of the inputs
The input set labeled x1, x2, …, xn is applied to the artificial neuron; collectively these inputs are referred to as the vector X and correspond to the signals arriving at the synapses of a biological neuron. Before it reaches the summation block, each signal is multiplied by an associated weight w1, w2, …, wn (Pitts, W., & McCulloch, W. S., 1947; Widrow, B., 1961). Each weight corresponds to the strength of a single biological synaptic connection, and the set of weights is referred to collectively as the vector W. The summation block, which corresponds to the biological cell body, adds the weighted inputs algebraically to produce an output labeled SUM, written in vector notation as SUM = X * W, or:
SUM = x1*w1 + x2*w2 + x3*w3 + … + xn*wn    (1)

2. Activation Functions
The activation function of a node in an artificial neural network defines the output of that node for a given set of inputs. It can behave like a standard integrated circuit, switching "ON" (i.e., 1) or "OFF" (i.e., 0) according to the input. It is also similar to the linear perceptron in neural networks, but only nonlinear activation functions allow networks to compute non-trivial problems using a small number of nodes; such activation functions introduce nonlinearity into the network.
2.1 Sigmoid Function
Here F is called the squashing function, which is the logistic or sigmoid function shown in Figure 2. The function F is expressed mathematically as:
F(x) = 1 / (1 + e^(-x))    (2)

Figure 2.
Depiction of the Sigmoid Function
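Equation (2) and its slope can be sketched in Python. The derivative formula F'(x) = F(x) * (1 − F(x)) used below is the standard identity for the logistic function:

```python
import math

def sigmoid(x):
    """Logistic (squashing) function of Eq. (2): F(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_gain(x):
    """Slope of the sigmoid curve; for the logistic function F'(x) = F(x) * (1 - F(x))."""
    fx = sigmoid(x)
    return fx * (1.0 - fx)

print(sigmoid(0.0))       # 0.5, the midpoint of the curve
print(sigmoid_gain(0.0))  # 0.25, the maximum slope
```

The slope is largest near x = 0 and approaches zero for large positive or negative excitation, which is the non-linear gain behavior discussed next.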
The non-linear gain of the artificial neuron using this activation function is calculated as the ratio of a change in F(X) to a small change in X; the gain is thus the slope of the curve at a specific excitation level. Here a specific activation function is used. Figure 3 shows how the activation function F accepts the value SUM produced by the summation block and produces the output signal OUT, which can also be a simple linear function (Widrow, B., & Angell, J. B., 1962; Widrow, B., & Hoff, M. E., 1960).
Figure 3. Artificial Neural Network working model
OUT = F(SUM)
OUT = 1 if SUM > T
OUT = 0 otherwise

where T is a threshold constant value.
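The thresholded output described above can be sketched as follows; the inputs, weights, and threshold values T are illustrative, not taken from the article:

```python
def threshold_output(inputs, weights, T):
    """OUT = 1 if the weighted sum SUM exceeds the threshold T, else OUT = 0."""
    total = sum(x * w for x, w in zip(inputs, weights))  # SUM = X * W
    return 1 if total > T else 0

# Illustrative example: SUM = 0.5*0.8 + 1.0*0.3 = 0.7
print(threshold_output([0.5, 1.0], [0.8, 0.3], T=0.5))  # 1, since 0.7 > 0.5
print(threshold_output([0.5, 1.0], [0.8, 0.3], T=0.9))  # 0, since 0.7 <= 0.9
```

Raising or lowering T shifts how strong the weighted input must be before the neuron fires.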