Optimization Algorithms

Copyright: © 2014 | Pages: 18
DOI: 10.4018/978-1-4666-6146-2.ch013

In this chapter, the basic definition of the Genetic Algorithm (GA) and some of the main operations applied in GA are explained. In addition, Swarm Intelligence (SI) is briefly introduced as a branch of intelligent behavior inspired by natural phenomena. Although Particle Swarm Optimization (PSO) has been covered in previous chapters, this chapter explains PSO in detail, and an example of the way PSO works is provided for better understanding. Some of the differences between PSO and GA are discussed, and readers will learn how to use GA and PSO for training a neural network. The experiments and content in this chapter are drawn from the study by Nuzly (2006) in her thesis entitled “Particle Swarm Optimization for Neural Network Learning Enhancement”.
Chapter Preview

2 Problem Background

In the previous chapters we explained the BP and MLP networks. However, the major disadvantages of BP are its slow rate of convergence (Zweiri, Whidborne, & Seneviratne, 2003) and its tendency to become trapped at local minima. This is a consequence of the hill-climbing technique in BP learning, which can leave the network stuck at a local minimum where every small change in the synaptic weights increases the cost function, even though elsewhere in the weight space there exists another set of synaptic weights for which the cost function is smaller than at the local minimum in which the network is stuck. Terminating the learning process because of trapping at a local minimum is not desirable.

Many researchers in the field of NN have worked on solving the problem of the slow convergence rate. Many powerful optimization algorithms have been devised, most of which are based on the simple gradient descent algorithm, as explained by Bishop (1995), such as conjugate gradient descent, scaled conjugate gradient descent, quasi-Newton BFGS, and Levenberg-Marquardt methods. The classical solutions are to improve the program code and upgrade the machine’s hardware. More recent solutions proposed by NN researchers try to guide the learning so that convergence becomes faster, with guidelines for selecting better cost functions, learning rates, momentum rates, and activation functions. The Genetic Algorithm (GA) is one of the algorithms proposed to determine the learning rate and momentum rate, and it produces a set of weights that can be used for testing on related data. Table 1 briefly describes the findings of several researchers aimed at increasing learning speed (Fnaiech et al., 2002), avoiding trapping in local minima (Wyeth, Buskey, & Roberts, 2000), and obtaining better classification results.
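To make the idea of replacing gradient-based BP training with a population-based search concrete, the following is a minimal sketch (not the code from the Nuzly (2006) study) of PSO searching the weight space of a tiny 2-2-1 MLP on the XOR problem. Because PSO moves a swarm of candidate weight vectors rather than following the local gradient, it is not subject to the hill-climbing trap described above. The network size, swarm size, and the inertia and acceleration coefficients (`w`, `c1`, `c2`) are illustrative assumptions, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR training data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

DIM = 9  # 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(flat):
    # Unpack a flat particle position into the 2-2-1 MLP's weights
    # and evaluate the mean squared error on the XOR data.
    W1 = flat[:4].reshape(2, 2)
    b1 = flat[4:6]
    W2 = flat[6:8]
    b2 = flat[8]
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return np.mean((out - y) ** 2)

def pso_train(n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-1.0, 1.0, (n_particles, DIM))
    vel = np.zeros((n_particles, DIM))
    pbest = pos.copy()                                   # personal bests
    pbest_cost = np.array([mse(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()            # global best
    gbest_cost = pbest_cost.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, DIM))
        r2 = rng.random((n_particles, DIM))
        # Velocity update: inertia + pull toward personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([mse(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved] = pos[improved]
        pbest_cost[improved] = costs[improved]
        if pbest_cost.min() < gbest_cost:
            gbest_cost = pbest_cost.min()
            gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, gbest_cost

weights, cost = pso_train()
print(f"final MSE: {cost:.4f}")
```

Note that the cost function is only ever *evaluated*, never differentiated: the swarm needs no gradient, which is why PSO (like GA) can keep exploring regions of the weight space a gradient method would never reach from its starting point.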
