### Neural Network

This section gives a brief overview of neural networks.

In the early 1940s, the pioneers of the field, McCulloch and Pitts, proposed a computational model based on a simple neuron-like element (McCulloch & Pitts, 1943). Since then, various types of neurons and neural networks have been developed independently of their direct similarity to biological neural networks, and they now constitute a well-established branch of science and technology.

Neurons are the atoms of neural computation: all neural networks are built up from these simple computational elements. An illustration of a (real-valued) neuron is given in Figure 1. The activity of neuron *n* is defined as

*x*_{n} = Σ_{m=1}^{N} *W*_{nm} *X*_{m} − *V*_{n}, *(1)*

where *W*_{nm} is the real-valued weight connecting neurons *n* and *m*, *X*_{m} is the real-valued input signal from neuron *m*, and *V*_{n} is the real-valued threshold of neuron *n*. The output of the neuron is then given by *f*(*x*_{n}). Although several types of activation function *f* can be used, the most commonly used are the sigmoidal function and the hyperbolic tangent function.

*Figure 1.* Real-valued neuron model. The weights W_{nm}, m = 1, ..., N, and the threshold V_{n} are all real numbers; the activation function f is a real function.
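The neuron model above can be sketched in a few lines of code. This is a minimal illustration, not taken from the source: the function name and the example weights, inputs, and threshold are chosen here for demonstration, and the sigmoidal function is used as the activation *f*.

```python
import math

def neuron_output(weights, inputs, threshold):
    """Real-valued neuron: activity is the weighted sum of the inputs
    minus the threshold (Eq. (1)); the output is f(activity), here
    using a sigmoidal activation f(x) = 1 / (1 + exp(-x))."""
    activity = sum(w * x for w, x in zip(weights, inputs)) - threshold
    return 1.0 / (1.0 + math.exp(-activity))

# Illustrative values: three inputs, hand-picked weights and threshold.
out = neuron_output([0.5, -0.25, 0.1], [1.0, 2.0, 3.0], 0.3)
```

With these particular values the activity sums to zero, so the sigmoid returns 0.5; any other choice of weights or threshold shifts the output within the interval (0, 1).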

Neural networks can be grouped into two categories: feedforward networks, whose graphs contain no loops, and recurrent networks, in which loops occur because of feedback connections. A feedforward network is made up of a number of neurons, arranged in layers and connected with each other through links whose values determine the weights of the connections. Each neuron in a layer is connected to all of the neurons in the following layer and to all of the neurons in the preceding layer; there are no connections among neurons within the same layer. A feedforward network can be trained with a learning rule so that the mapping of the input data matches the desired target at the network output. The most popular learning rule is the back-propagation algorithm (Rumelhart, Hinton, & Williams, 1986). It is well known that a trained feedforward neural network can generalize to unseen input data; this characteristic is called the generalization property.
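The layered, loop-free structure described above can be illustrated with a forward pass through a small feedforward network. This is a sketch under stated assumptions: the function names and the 2-3-1 architecture with its weights and thresholds are invented for illustration, each neuron follows Eq. (1), and the hyperbolic tangent is used as the activation.

```python
import math

def layer_forward(inputs, weights, thresholds):
    """One fully connected layer: each neuron computes its activity
    per Eq. (1) and applies the hyperbolic tangent activation."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) - v)
            for ws, v in zip(weights, thresholds)]

def feedforward(inputs, layers):
    """Propagate a signal layer by layer; no feedback connections,
    so the computation graph contains no loops."""
    signal = inputs
    for weights, thresholds in layers:
        signal = layer_forward(signal, weights, thresholds)
    return signal

# Illustrative 2-3-1 network: (per-layer weight matrix, thresholds).
layers = [
    ([[0.2, -0.4], [0.7, 0.1], [-0.3, 0.5]], [0.0, 0.1, -0.1]),  # hidden layer
    ([[0.6, -0.2, 0.3]], [0.05]),                                # output layer
]
y = feedforward([1.0, 0.5], layers)
```

Training with back-propagation would then adjust the weights and thresholds by gradient descent on the output error; the sketch above covers only the forward mapping.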