The introduction of the artificial neuron by McCulloch and Pitts, inspired by the biological neuron, is generally considered the beginning of the field of artificial neural networks. Since then, many new networks and new algorithms for neural networks have been invented, with the result that the field is not laid out very clearly. Most textbooks on (artificial) neural networks (Rojas, 2000; Silipo, 2002) give no general definition of what a neural network is, but rather an example-based introduction leading from the biological model to some artificial successors. Perhaps the most promising approach is to define a neural network as a network of many simple processors (units), each possibly having a small amount of local memory. The units are connected by communication channels (connections) that usually carry numeric (as opposed to symbolic) data; the strength of a connection is called its weight. The units operate only on their local data and on the inputs they receive via the connections. Neural networks typically have great potential for parallelism, since the computations of the components are largely independent of each other.

Neural networks work best if the system being modeled has a high tolerance for error; one would therefore be ill-advised to use a neural network to balance one's checkbook. However, they work very well for:

• capturing associations or discovering regularities within a set of patterns;
• applications where the number of variables or the diversity of the data is very great;
• applications where the relationships between variables are only vaguely understood; or
• applications where the relationships are difficult to describe adequately with conventional approaches.
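The unit model described above (simple processors combining weighted numeric inputs) can be sketched minimally as a McCulloch-Pitts threshold unit. The function name, the weights, and the AND-gate example below are illustrative choices, not taken from the text:

```python
def mcculloch_pitts_unit(inputs, weights, threshold):
    """Fire (output 1) if and only if the weighted sum of the
    inputs reaches the threshold; otherwise stay silent (output 0)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Example: with unit weights and threshold 2, the unit fires only
# when both binary inputs are active, i.e. it computes logical AND.
def and_gate(x1, x2):
    return mcculloch_pitts_unit([x1, x2], [1, 1], threshold=2)
```

Because each unit operates only on its own inputs and weights, many such units can be evaluated independently, which is the source of the parallelism mentioned above.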