Vector Optimization of Neural Network Classifiers

DOI: 10.4018/978-1-7998-0414-7.ch090

Abstract

The content and statement of the problem of vector optimization of neural network classifiers are given, and a method of solution is proposed. A method for vector optimization of a neural network text classifier is also proposed.

Introduction

This chapter deals with a nonlinear trade-off (compromise) scheme in the problem of multicriteria optimization of neural network classifiers. An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system.
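As a brief illustration of what such a scheme does (the chapter's concrete formula is not reproduced in this preview, so the form below is only an assumption, with Y, y_k, and alpha_k introduced for this sketch), a nonlinear trade-off scheme folds the vector of partial criteria into a single scalar criterion whose value grows sharply as any partial criterion approaches its worst admissible level:

\[ Y(w) = \sum_{k=1}^{s} \alpha_k \left[ 1 - y_k^{0}(w) \right]^{-1}, \qquad y_k^{0}(w) \in [0,1), \quad \alpha_k > 0, \quad \sum_{k=1}^{s} \alpha_k = 1, \]

where \(y_k^{0}(w)\) are the partial quality criteria of the classifier (for example, classification error and network complexity) normalized to \([0,1)\), \(\alpha_k\) are criterion weights, and \(w\) collects the synaptic weights being optimized. Minimizing \(Y(w)\) then yields a single compromise solution instead of a whole Pareto set.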

It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.
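As a minimal, generic sketch of this idea (not the chapter's method; the function name train_neuron, the toy data, and all parameter values below are hypothetical), a single sigmoid neuron can learn from examples by repeatedly adjusting its synaptic weights so that its prediction error decreases:

import numpy as np

def train_neuron(X, t, lr=0.1, epochs=100):
    """Train a single sigmoid neuron by gradient descent.
    X : (n_samples, n_features) input patterns
    t : (n_samples,) target labels in {0, 1}
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])   # synaptic weights
    b = 0.0                                      # bias
    for _ in range(epochs):
        y = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # neuron output
        err = y - t                              # prediction error
        # adjust the "synaptic connections" to reduce the error
        w -= lr * X.T @ err / len(t)
        b -= lr * err.mean()
    return w, b

# learning the logical AND function from four examples
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)
w, b = train_neuron(X, t, lr=1.0, epochs=2000)
print((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int))  # expected: [0 0 0 1]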

Computers are great at solving algorithmic and mathematical problems, but much of the world cannot easily be captured by a mathematical algorithm. Facial recognition and language processing are two examples of problems that are hard to quantify as an algorithm, yet these tasks are trivial for humans.

The key to Artificial Neural Networks is that their design, inspired by how our nervous system functions, enables them to process information in a way similar to our biological brains. This makes them useful tools for solving problems like facial recognition, which our biological brains can do easily.

Our brains use extremely large interconnected networks of neurons to process information and model the world we live in. Electrical inputs are passed through this network of neurons, resulting in an output being produced. There are about one hundred billion (100,000,000,000) neurons inside the human brain, each with about one thousand synaptic connections. It is effectively the way in which these synapses are wired that gives our brains the ability to process information the way they do. The reader can find more information about ANNs, for example, at https://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html.

An important kind of artificial neural network is the neural network classifier. Such classifiers are used for technical and medical diagnostics, classification of various kinds of information sources, and so on. In a sufficiently general case, the structure of a q-layer neural network classifier with direct (feedforward) links is presented in Figure 1; a minimal code sketch of this structure is given after the notation list below.

Figure 1. The neural network classifier

Here

  • x1,x2,…,xn – the attributes of the classification object constituting the input vector;

  • n – the number of neural elements in the receptor layer;

  • p1,p2,…,pq – the number of neurons in each of hidden (processing) layers;

  • pq+1=m – the number of neurons in the output layer (the number of classes);

  • y1,y2,…,ym – the output vector of the neural network, which determines to which of the m classes the classification object belongs;

  • w1,w2,…,wq, wq+1 – vectors of the synaptic weights of the neural network.
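
The following sketch is only an assumption about how such a structure might look in code (the chapter specifies the architecture of Figure 1 but not an implementation; the function names and the tanh/softmax activations are illustrative choices). It builds a feedforward classifier with an n-element receptor layer, hidden layers of sizes p1, …, pq, and an output layer of m neurons, whose weight matrices play the role of the vectors w1, …, wq+1:

import numpy as np

def make_classifier(n, hidden_sizes, m, seed=0):
    """Create weight matrices for a feedforward classifier with an
    n-element receptor layer, hidden layers of sizes p1..pq, and an
    output layer of m neurons (one per class)."""
    rng = np.random.default_rng(seed)
    sizes = [n, *hidden_sizes, m]
    # weights[k] maps layer k (sizes[k] neurons) to layer k+1 (sizes[k+1] neurons)
    return [rng.normal(scale=0.1, size=(sizes[k], sizes[k + 1]))
            for k in range(len(sizes) - 1)]

def classify(weights, x):
    """Propagate the attribute vector x through the network and return
    the index of the winning output neuron together with the output vector."""
    a = np.asarray(x, dtype=float)
    for w in weights[:-1]:
        a = np.tanh(a @ w)          # hidden (processing) layers
    z = a @ weights[-1]
    y = np.exp(z - z.max())
    y = y / y.sum()                 # output vector y1, ..., ym
    return int(np.argmax(y)), y

# example: n = 4 attributes, q = 2 hidden layers, m = 3 classes
weights = make_classifier(n=4, hidden_sizes=[8, 6], m=3)
label, y = classify(weights, [0.2, 0.5, 0.1, 0.9])
print(label, y)

In the vector-optimization setting, the weight matrices returned by make_classifier correspond to the decision variables w1, …, wq+1 over which the partial criteria of the classifier are optimized.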
