Mathematical Modeling of Artificial Neural Networks

Radu Mutihac
Copyright © 2009 | Pages: 8
DOI: 10.4018/978-1-59904-849-9.ch156

Abstract

Models and algorithms designed to mimic the information processing and knowledge acquisition of the human brain are generically called artificial or formal neural networks (ANNs), parallel distributed processing (PDP) models, neuromorphic systems, or connectionist models. The term network is ubiquitous today: computer networks exist, communications are referred to as networking, and corporations and markets are structured in networks. The concept of the ANN was initially coined as a hopeful vision of anticipating the synthesis of artificial intelligence (AI) by emulating the biological brain. ANNs are an alternative to symbolic programming, aiming to implement neural-inspired concepts in AI environments (neural computing) (Hertz, Krogh, & Palmer, 1991), whereas cognitive systems attempt to mimic actual biological nervous systems (computational neuroscience). All conceivable neuromorphic models lie in between and are supposed to be simplified but meaningful representations of some reality. In order to establish a unifying theory of neural computing and computational neuroscience, mathematical theories should be developed along with specific methods of analysis (Amari, 1989; Amit, 1990). The following outlines a tentative, mathematically closed framework for neural modeling.
Chapter Preview

Background

ANNs may be regarded as dynamic systems (discrete or continuous) whose states are the activity patterns and whose controls are the synaptic weights, which govern the flow of information between the processing units (adaptive systems controlled by synaptic matrices). ANNs are parallel in the sense that most neurons process data at the same time. This processing can be synchronous, if the processing time is the same for all units of the net, and asynchronous otherwise. Synchronous models may be regarded as discrete models. Since biological neurons are asynchronous, they call for a continuous-time treatment by differential equations.
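As a minimal sketch of these two views (in Python with NumPy; the network size, the weights, and the dynamics dx/dt = -x + tanh(Wx) are illustrative assumptions, not taken from the chapter), a synchronous discrete-time update and a continuous-time treatment integrated by Euler steps could look as follows:

import numpy as np

rng = np.random.default_rng(0)
n = 4                                    # number of processing units (illustrative)
W = rng.normal(scale=0.5, size=(n, n))   # synaptic weight matrix (the control)
x = rng.normal(size=n)                   # activity pattern (the state)

# Synchronous, discrete-time model: every unit updates at the same time step.
x_next = np.tanh(W @ x)

# Asynchronous behaviour idealized as continuous time: dx/dt = -x + tanh(W x),
# here approximated by explicit Euler integration.
dt = 0.01
for _ in range(1000):
    x = x + dt * (-x + np.tanh(W @ x))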

Alternatively, ANNs can recognize the state of the environment and act on the environment to adapt to given viability constraints (cognitive systems controlled by conceptual controls). Knowledge is stored in conceptual controls rather than encoded in synaptic matrices, whereas learning rules describe the dynamics of the conceptual controls in terms of state evolution while adapting to viability constraints.

The concept of a paradigm, as applied to ANNs, typically comprises a description of the form and functions of the processing unit (neuron, node), a network topology that describes the pattern of weighted interconnections among the units, and a learning rule to establish the values of the weights (Domany, 1988). Although paradigms differ in detail, they share a common subset of attributes (Jansson, 1991): simple processing units, high connectivity, parallel processing, a nonlinear transfer function, feedback paths, non-algorithmic data processing, self-organization, adaptation (learning), and fault tolerance. Extra features might include generalization, useful outputs from fuzzy inputs, energy saving, and potentially high overall operating speed.
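Read concretely, a paradigm bundles a unit transfer function, a topology, and a learning rule. The Python fragment below is a hypothetical sketch of that decomposition; the names transfer and hebbian_update and all numeric values are our illustrative assumptions, and the Hebbian rule is just one possible choice of learning rule:

import numpy as np

def transfer(a):
    # Nonlinear transfer function of a processing unit (here a sigmoid).
    return 1.0 / (1.0 + np.exp(-a))

# Topology: 3 input units fully connected to 2 output units.
rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(2, 3))

def hebbian_update(W, x, lr=0.01):
    # Learning rule: Hebbian weight change, delta_W = lr * outer(y, x).
    y = transfer(W @ x)
    return W + lr * np.outer(y, x)

x = np.array([1.0, 0.5, -0.3])   # input pattern
W = hebbian_update(W, x)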

The digital paradigm dominating computer science assumes that information must be digitized to avoid noise interference and signal degradation. In contrast, a neuron is highly analog in the sense that its computations are based on spatiotemporal integrative processes of smoothly varying ion currents at the trigger zone rather than on bits. Yet neural systems are highly efficient and reliable information processors.

Key Terms in this Chapter

Self-Organization Principle: A process in which the internal organization of a system that continually interacts with its environment increases in complexity without being driven by an outside source. Self-organizing systems typically exhibit emergent properties.

Relaxation: The process by which ANNs minimize an objective function, using semi- or non-parametric methods to iteratively update the weights.
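As a hedged illustration (our toy example, not the chapter's), relaxation can be read as iterative descent on an objective E(w); here a squared-error objective for a small linear unit is minimized by repeated gradient steps:

import numpy as np

# Toy objective: E(w) = ||X w - t||^2 for a small linear unit.
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = np.array([1.0, 1.0, 0.0])
w = np.zeros(2)

lr = 0.1
for _ in range(200):                 # relax the weights toward a minimum of E
    grad = 2.0 * X.T @ (X @ w - t)   # gradient of the objective
    w -= lr * grad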

Artificial Neural Networks (ANNs): Highly parallel networks of interconnected simple computational elements (cells, nodes, neurons, units), which mimic biological neural networks.

Emergence: The way in which complex systems and patterns, such as ANNs, arise out of a multiplicity of relatively simple interactions.

Paradigm of ANNs: Set of (i) the processing units’ form and functions, (ii) a network topology that describes the number of layers, the number of nodes per layer, and the pattern of weighted interconnections among the nodes, and (iii) a learning (training) rule that specifies the way weights should be adapted during use in order to improve network performance.

Robustness: The property of ANNs to accomplish their tasks reliably when handling incomplete and/or corrupted data. Moreover, the results should remain consistent even if some part of the network is damaged.

Learning (Training) Rule: An iterative process of updating the weights from cases (instances) repeatedly presented as input. Learning (adaptation) is essential in pattern recognition (PR), where the training data set is limited and new environments are continuously encountered.
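One concrete instance of such an iterative update is the classical perceptron rule; the toy example below (our choice of task and learning rate, not the chapter's) trains a single threshold unit on the logical AND function by repeatedly presenting the four cases:

import numpy as np

# Training cases (instances) for a logical AND unit; first column is a bias input.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
t = np.array([0, 0, 0, 1])
w = np.zeros(3)

for epoch in range(20):              # cases repeatedly presented as input
    for x, target in zip(X, t):
        y = 1 if w @ x > 0 else 0    # threshold unit output
        w += 0.1 * (target - y) * x  # perceptron weight update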

Complex Systems: Systems made of several interconnected simple parts that together exhibit a high degree of complexity, from which a higher-order behaviour emerges.
