
Ingrid Fischer (University of Konstanz, Germany)

Copyright: © 2009 | Pages: 6

DOI: 10.4018/978-1-60566-010-3.ch217

Chapter Preview

The introduction of the artificial neuron by McCulloch and Pitts, who were inspired by the biological neuron, is considered the beginning of the field of artificial neural networks. Since then, many new networks and new learning algorithms have been invented. Most textbooks on (artificial) neural networks give no general definition of what a neural net is, but rather an example-based introduction leading from the biological model to some artificial successors. Perhaps the most promising approach to defining a neural network is to see it as a network of many simple processors (“units”), each possibly having a small amount of local memory. The units are connected by communication channels (“connections”) that usually carry numeric (as opposed to symbolic) data, called the weight of the connection. The units operate only on their local data and on the inputs they receive via the connections. It is typical of neural networks that they have great potential for parallelism, since the computations of the components are largely independent of each other. Typical application areas are:

• Capturing associations or discovering regularities within a set of patterns;

• Any application where the number of variables or the diversity of the data is very great;

• Any application where the relationships between variables are vaguely understood; or,

• Any application where the relationships are difficult to describe adequately with conventional approaches.
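The “network of simple processors” definition above can be sketched in a few lines of code. This is an illustrative sketch only, not taken from the chapter: each unit keeps a small amount of local memory (here, a bias), and operates only on that local data plus the inputs arriving over its weighted connections. The class and method names are invented for the example, and a sigmoid is assumed as the activation function.

```python
import math

class Unit:
    """A simple processor with a small amount of local memory."""

    def __init__(self, bias=0.0):
        self.bias = bias          # local memory of the unit
        self.incoming = []        # communication channels: (source, weight)
        self.output = 0.0

    def connect(self, source, weight):
        # Connections carry numeric data: the weight of the connection.
        self.incoming.append((source, weight))

    def activate(self):
        # The unit uses only its local data and the inputs received
        # via its connections: weighted sum, then a sigmoid squashing.
        total = self.bias + sum(src.output * w for src, w in self.incoming)
        self.output = 1.0 / (1.0 + math.exp(-total))
        return self.output

# Two input units feeding one output unit:
a, b = Unit(), Unit()
out = Unit(bias=-1.0)
out.connect(a, 0.5)
out.connect(b, 0.5)
a.output, b.output = 1.0, 1.0
print(out.activate())  # sigmoid(-1.0 + 0.5 + 0.5) = sigmoid(0) = 0.5
```

Because each unit reads only its own connections, all units in a layer could activate in parallel, which is the parallelism potential mentioned above.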

Neural networks are not programmed but are trained in different ways. In supervised learning, examples are presented to an initialized net, and the net learns from the input and output of these examples. There are as many learning algorithms as there are types of neural nets. Learning, too, is physiologically motivated. When an example is presented that the neural network cannot yet reproduce correctly, several different steps are possible: the neuron’s data is changed, the connection’s weight is changed, or new connections and/or neurons are inserted. Introductory books on neural networks are (Graupe, 2007; Coolen, Kuehn & Sollich, 2005).
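As one concrete instance of learning by weight change, the classic perceptron rule can serve as a sketch. The chapter does not commit to a particular algorithm, so the rule, the learning rate, and the AND-gate training data below are all illustrative assumptions: whenever the net’s output disagrees with the target, the connection weights and bias are nudged toward the correct answer.

```python
# Supervised learning sketch: perceptron rule on the AND function.
# All names and values are illustrative, not from the chapter.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

def predict(x):
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

# Present the examples repeatedly; on each wrong answer, change the
# connection weights (and bias) slightly -- this is the "weight is
# changed" step of the learning process described above.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

print([predict(x) for x, _ in examples])  # converges to [0, 0, 0, 1]
```

The other steps mentioned above (changing a neuron’s data, or inserting new connections and neurons) belong to other families of learning algorithms, such as constructive methods.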

There are many advantages and limitations to neural network analysis, and to discuss this subject properly one must look at each individual type of network. Nevertheless, there is one specific limitation of which potential users should be aware: neural networks are, more or less depending on the type, the ultimate “black boxes”. The final result of the learning process is a trained network that provides no equations or coefficients defining a relationship beyond its own internal mathematics.

Graphs are widely used concepts within computer science; in nearly every field, graphs serve as a tool for visualization, summarization of dependencies, explanation of connections, etc. Famous examples are the many different kinds of nets and graphs, e.g. semantic nets, Petri nets, flow charts, interaction diagrams, or neural networks, the focus of this chapter. Invented 35 years ago, graph transformations have been constantly expanding. Wherever graphs are used, graph transformations are also applied (Rozenberg, 1997; Ehrig, Engels, Kreowski, & Rozenberg, 1999; Ehrig, Kreowski, Montanari, & Rozenberg, 1999; Ehrig, Ehrig, Prange & Taentzer, 2006).

Graph transformations are a very promising method for modeling and programming neural networks. The graph part is automatically given, as the name “neural network” already indicates. With graph transformations as the methodology, it is easy to model algorithms on this graph structure. Structure-preserving and structure-changing algorithms can be modeled equally well. This is not the case for the widely used matrix representations, programmed mostly in *C* or *C++*; in those approaches, modeling structure change becomes more difficult.
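The contrast between the two representations can be illustrated with a small sketch (invented for this example, not from the chapter). When the net is stored as an explicit graph, a structure-changing step such as splicing in a new neuron is just adding a node and rewiring two edges; with a fixed weight matrix, the same change would force reallocating and copying the whole matrix.

```python
# Network stored as an explicit graph: each unit maps to its incoming
# connections, {source: weight}. Names and weights are illustrative.
network = {
    "in1": {},
    "in2": {},
    "out": {"in1": 0.4, "in2": 0.6},
}

def insert_hidden_unit(net, name, source, target):
    """Splice a new unit between source and target (a structure-changing
    rule), reusing the old connection weight."""
    old_weight = net[target].pop(source)       # remove the direct edge
    net[name] = {source: old_weight}           # new node inherits it
    net[target][name] = 1.0                    # link new node to target

insert_hidden_unit(network, "h1", "in1", "out")
print(sorted(network["out"]))  # ['h1', 'in2'] -- 'in1' now routes via 'h1'
```

A graph transformation rule corresponds closely to such an operation: a left-hand-side pattern (the direct edge) is matched and replaced by a right-hand-side pattern (the path through the new unit), leaving the rest of the graph untouched.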

This directly leads to a second advantage. Graph transformations have proven useful for visualizing the network and its algorithms. Most modern neural network simulators have some kind of visualization tool, and graph transformations offer a natural basis for this visualization, since the algorithms are already expressed as visual rules.

