Multicriteria Synthesis of Neural Network Architecture


DOI: 10.4018/978-1-5225-5586-5.ch004

Abstract

The statement of the problem and a procedure for vector optimization of the neural-network classifier architecture are considered. As the criterion function, a scalar convolution of the criteria under the nonlinear scheme of compromises is proposed. Search methods of optimization with discrete arguments are used. An example, a neural-network classifier of texts, is given.

Introduction

An artificial neural network is an information processing paradigm that can serve as part of a techno-social system for modern economic and governmental infrastructures. The key to artificial neural networks is that their design enables them to process information in a way similar to our own biological brains, drawing inspiration from how the nervous system functions. This makes them useful tools for solving problems such as language processing and data mining, which our biological brains handle easily.

An artificial neural network is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system.

It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. Artificial neural networks, like people, learn by example. An artificial neural network is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons; this is true of artificial neural networks as well.
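Learning by adjusting connection weights can be illustrated with a minimal sketch (not from the chapter): a single-layer perceptron whose weights are shifted in proportion to the error on each labelled example, here learning the logical OR function.

```python
import numpy as np

def perceptron_update(w, x, target, lr=0.1):
    """One learning step: adjust the weights (the 'synaptic connections')
    in proportion to the classification error on example x."""
    y = 1.0 if np.dot(w, x) >= 0 else 0.0   # current prediction
    return w + lr * (target - y) * x        # shift weights toward the target

# Learn the OR function from labelled examples (bias folded in as x[0] = 1).
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
T = np.array([0, 1, 1, 1], dtype=float)

w = np.zeros(3)
for _ in range(20):                         # a few passes over the examples
    for x, t in zip(X, T):
        w = perceptron_update(w, x, t)

preds = [1.0 if np.dot(w, x) >= 0 else 0.0 for x in X]
print(preds)  # matches the targets [0.0, 1.0, 1.0, 1.0]
```

After a few passes the weights stop changing, because the OR examples are linearly separable and every example is classified correctly.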

Neural network classifiers are an important kind of artificial neural network. They are applicable in technical and medical diagnostics, classification of various information sources, etc. Figure 1 shows the structure of a generalized q-layer neural network classifier with direct connections.

Figure 1.

The neural network classifier


In the figure, x1, …, xn are the attributes of the classification object that constitute the input vector X; n is the number of neural elements in the receptor layer; N1, …, Nq are the numbers of neurons in each of the hidden (processing) layers 1, …, q; m is the number of neurons in the output layer (the number of classes); Y is the output vector of the neural network that assigns the classification object to one of the m classes; W1, …, Wq are the vectors of the synaptic weights of the neural network.
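The figure's structure can be sketched as a forward pass through q hidden layers (a minimal NumPy illustration; the layer sizes, sigmoid activation, and random weights are assumptions for the example, not the chapter's settings):

```python
import numpy as np

def forward(x, weights, biases):
    """Forward pass of a feedforward (direct-connection) classifier:
    each layer applies an activation to a weighted sum of the previous
    layer's outputs; the final vector scores the m classes."""
    a = x
    for W, b in zip(weights, biases):
        a = 1.0 / (1.0 + np.exp(-(W @ a + b)))   # sigmoid activation
    return a

rng = np.random.default_rng(0)
# Illustrative sizes: 4 input attributes, two hidden layers of 5, m = 3 classes.
sizes = [4, 5, 5, 3]
weights = [rng.standard_normal((sizes[i + 1], sizes[i])) for i in range(3)]
biases = [rng.standard_normal(sizes[i + 1]) for i in range(3)]

y = forward(rng.standard_normal(4), weights, biases)
print(y.shape, int(np.argmax(y)))  # output vector Y over 3 classes; argmax picks the class
```

The class assignment is read off as the index of the largest component of the output vector Y.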

Let us present the necessary information from the theory of neural networks (Bodyansky & Rudenko, 2004; Borisov, 2007; Golovko, 2001). An artificial neural network is a set of neural elements and connections among them.

Figure 2.

A neuron structure

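A single neural element of the kind shown in Figure 2 is conventionally modelled as a weighted sum of its inputs, offset by a threshold and passed through an activation function (a hedged sketch; the tanh activation and the numbers are illustrative, not taken from the chapter):

```python
import numpy as np

def neuron(x, w, theta, f=np.tanh):
    """A single neural element: weighted sum of the inputs minus a
    threshold theta, passed through the activation function f."""
    return f(np.dot(w, x) - theta)

out = neuron(np.array([0.5, -1.0, 2.0]), np.array([0.2, 0.4, 0.1]), theta=0.1)
print(round(float(out), 4))  # a single scalar output in (-1, 1)
```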
