Functional Networks

Oscar Fontenla-Romero, Bertha Guijarro-Berdiñas, Beatriz Pérez-Sánchez
Copyright © 2009 | Pages: 10
DOI: 10.4018/978-1-59904-849-9.ch101

Abstract

Functional networks are a generalization of neural networks that is achieved by using multi-argument, learnable functions; that is, in these networks the transfer functions associated with neurons are not fixed but learned from data. In addition, there is no need to include parameters to weight the links among neurons, since their effect is subsumed by the neural functions. Another distinctive characteristic of these models is that the initial topology of a functional network can be specified based on the features of the problem at hand. Knowledge about the problem can therefore guide the development of the network structure, although in the absence of such knowledge a general model can always be used. This article presents a review of the field of functional networks, illustrated with practical examples.

Background

Artificial Neural Networks (ANN) are a powerful tool for building systems able to learn and adapt to their environment, and they have been successfully applied in many fields. Their learning process consists of adjusting the values of their parameters, i.e., the weights connecting the network's neurons. This adaptation is carried out by a learning algorithm that tries to fit some training data representing the problem to be learned. The algorithm is guided by the minimization of an error function that measures how well the ANN fits the training data (Bishop, 1995). This process is called parametric learning. One of the most popular neural network models is the Multilayer Perceptron (MLP), for which many learning algorithms can be used: from the classical backpropagation (Rumelhart, Hinton & Williams, 1986) to the more complex and efficient Scaled Conjugate Gradient (Møller, 1993) or Levenberg-Marquardt (Hagan & Menhaj, 1994) algorithms.
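
To make the idea of parametric learning concrete, the following is a minimal sketch of gradient descent on a mean-squared error function for a single tanh neuron. The one-neuron model, the synthetic data, and the learning rate are illustrative assumptions, not an example taken from the chapter.

```python
import numpy as np

# Parametric learning sketch: gradient descent on a mean-squared error
# for a single neuron y = tanh(w @ x + b). Model, data and learning rate
# are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))        # training inputs
t = np.tanh(0.8 * X[:, 0] - 0.3 * X[:, 1])   # synthetic targets

w = rng.normal(size=2)
b = 0.0
lr = 0.1
for epoch in range(500):
    y = np.tanh(X @ w + b)                   # forward pass
    err = y - t
    grad_pre = err * (1.0 - y**2)            # d(MSE)/d(pre-activation)
    w -= lr * (X.T @ grad_pre) / len(X)      # weight update
    b -= lr * grad_pre.mean()                # bias update

print("final MSE:", np.mean((np.tanh(X @ w + b) - t) ** 2))
```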

In addition, the topology of the network (number of layers, neurons, connections, activation functions, etc.) also has to be determined. This is called structural learning, and it is carried out mostly by trial and error.

As a result, there are two main drawbacks in dealing with neural networks:

1. The resulting function does not admit a physical or engineering interpretation; in this sense, neural networks act as black boxes.

2. There is no guarantee that the weights provided by the learning algorithm correspond to a global optimum of the error function; they may correspond to a local one.

Models like Generalized Linear Networks (GLN) present a unique global optimum that can be obtained by solving a set of linear equations. However, their mapping function is limited, as this model consists of a single layer of adaptive weights ($w_j$) that produces a linear combination of nonlinear functions ($\phi_j$):

$y(\mathbf{x}) = \sum_{j=1}^{m} w_j \phi_j(\mathbf{x}).$

Some other popular models are Radial Basis Function Networks (RBF), whose hidden units use distances to a prototype vector ($\boldsymbol{\mu}_j$) followed by a transformation with a localized function such as the Gaussian:

$\phi_j(\mathbf{x}) = \exp\left(-\frac{\|\mathbf{x} - \boldsymbol{\mu}_j\|^2}{2\sigma_j^2}\right).$
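
The following sketch illustrates the closed-form fit mentioned above: with Gaussian basis functions (so the GLN doubles as an RBF network with fixed prototypes), the unique global optimum of the squared error is obtained by solving a linear least-squares system. The prototype placement, common width, and toy one-dimensional data are illustrative assumptions.

```python
import numpy as np

# GLN / RBF fit with a closed-form global optimum: y(x) = sum_j w_j * phi_j(x),
# where phi_j is a Gaussian centred on prototype mu_j. Prototypes, width and
# the toy 1-D data are illustrative assumptions.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=80)
t = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=80)  # noisy targets

mu = np.linspace(0, 1, 10)   # prototype vectors (scalars in 1-D)
sigma = 0.1                  # common Gaussian width

# Design matrix: Phi[i, j] = exp(-||x_i - mu_j||^2 / (2 sigma^2))
Phi = np.exp(-(x[:, None] - mu[None, :]) ** 2 / (2 * sigma**2))

# Solving the linear least-squares system gives the unique global optimum
# of the squared-error function -- no iterative learning is needed.
w, *_ = np.linalg.lstsq(Phi, t, rcond=None)

print("training MSE:", np.mean((Phi @ w - t) ** 2))
```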

The resulting architecture is simpler than that of the MLP, thereby reducing the complexity of structural learning and opening up the possibility of a physical interpretation. However, these models present other limitations, such as their inability to discard non-significant input variables (Bishop, 1995), their difficulty in learning some logic transformations (Moody & Darken, 1989), or their need for a large number of nodes even for a linear mapping when the precision requirement is high (Youssef, 1993).
