Developmental Swarm Intelligence: Developmental Learning Perspective of Swarm Intelligence Algorithms

Yuhui Shi
Copyright: © 2014 |Pages: 19
DOI: 10.4018/ijsir.2014010102

Abstract

In this paper, the necessity of embedding developmental learning in a swarm intelligence algorithm is established by briefly considering brain evolution, brain development, the brainstorming process, and related topics. Several swarm intelligence algorithms are then examined from a developmental learning perspective. Finally, a framework for developmental swarm intelligence algorithms is presented, both to aid understanding of such algorithms and to guide the design and implementation of new developmental swarm intelligence algorithms and developmental evolutionary algorithms.
Introduction

Can a swarm intelligence algorithm develop a learning capacity that better solves an optimization problem unknown at the algorithm's design or implementation time? In the swarm intelligence research field, we face many different types of optimization problems under different environments. For example, there are single-objective, multi-objective, constrained, and combinatorial optimization problems, among others; and there are optimization problems in fixed, dynamically changing, and unknown environments. As the no-free-lunch theorem states (Wolpert & Macready, 1997), no single algorithm works best for all problems. That is to say, an algorithm that performs well on one kind of problem may perform poorly on others. Finding the best algorithm for a given kind of problem is rarely easy, if not impossible, especially when we have no prior knowledge about the problem and the environment it is in. An ideal optimization algorithm should therefore be able to change itself to acquire a learning capacity suited to the problem at hand and its environment; that is, it should be able to develop its own learning capacity, or learning potential, matched to the problem and its environment, and thereby learn and solve the problem better.

Research on optimization algorithms has been conducted for many years because many real-world problems can ultimately be represented or modelled as optimization problems, which then require optimization algorithms to solve or to find solutions. Traditionally, hill-climbing algorithms are commonly used; they usually require the optimization problem to be representable by a mathematical function that is, furthermore, continuous and differentiable. One commonly used hill-climbing algorithm is the steepest descent approach (Battiti, 1992). If an optimization problem can be represented by a continuous and differentiable convex function, the steepest descent approach can always find its global optimal solution; if the problem cannot be represented by a convex function, the approach will find a solution that may or may not be a global optimum, and in general it will find only a local optimum, which may or may not be good enough. Which local optimum it finds depends on the initial starting solution. For this reason, hill-climbing algorithms are usually called local search algorithms (Hoos & Stutzle, 2005): such an algorithm has the capability to find an optimal solution, but whether it does so depends on its starting point. To overcome this tendency to fall into a local optimum, various techniques have been added to hill-climbing algorithms to give them a better chance of escaping local optima and eventually finding at least a good enough solution. These modified algorithms include stochastic gradient descent (Gardner, 1984), random walk algorithms (Grady, 2006), and simulated annealing (Granville et al., 1994).

In order to remove the limitation that an optimization problem must be represented or modelled by a continuous and differentiable function, and to improve the chance of finding better solutions, population-based heuristic algorithms were proposed and studied. These algorithms do not require the problem to be representable by a continuous and differentiable function; the requirement is relaxed to merely being able to evaluate any candidate solution to the problem. Commonly used population-based heuristic algorithms include genetic algorithms (Holland, 1975), evolutionary programming (Fogel, 1962), genetic programming (Koza, 1992), evolution strategies (Rechenberg, 1973), and swarm intelligence algorithms (Eberhart & Shi, 2007). So far, most research on these population-based heuristic algorithms has focused on their search capability or learning capability.
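The key relaxation described above, namely that only evaluation of candidate solutions is required, can be illustrated with a deliberately generic population-based search loop. This is not any specific algorithm from the literature; the structure, names, and parameters are this sketch's own assumptions. Note that the objective is treated as a black box and need not be continuous or differentiable.

```python
import random

# Generic population-based search sketch: keep a population of candidate
# solutions, perturb the best one found so far, and keep the improvement.
# All names and parameter values are illustrative assumptions.

def population_search(evaluate, dim=2, pop_size=20,
                      generations=200, sigma=0.5, seed=1):
    rng = random.Random(seed)
    # Initial population: random points in [-5, 5]^dim.
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=evaluate)
    for _ in range(generations):
        # New candidates are Gaussian perturbations of the best so far.
        pop = [[x + rng.gauss(0, sigma) for x in best]
               for _ in range(pop_size)]
        candidate = min(pop, key=evaluate)
        if evaluate(candidate) < evaluate(best):
            best = candidate
    return best

# A non-differentiable objective: |x| + |y|, minimized at the origin.
# Only the ability to evaluate it is needed.
best = population_search(lambda p: sum(abs(v) for v in p))
```

The only interaction with the problem is the call to `evaluate`, which is precisely the relaxed requirement the paragraph above describes; gradient-based methods such as steepest descent could not be applied to this objective at its minimum.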
