Particle Swarm Optimization and Intelligence: Advances and Applications

Konstantinos E. Parsopoulos (University of Ioannina, Greece) and Michael N. Vrahatis (University of Patras, Greece)
Release Date: January 2010 | Copyright: © 2010 | Pages: 328
ISBN13: 9781615206667 | ISBN10: 1615206663 | EISBN13: 9781615206674 | DOI: 10.4018/978-1-61520-666-7


Since its initial development, particle swarm optimization has gained wide recognition due to its ability to provide solutions efficiently, requiring only minimal implementation effort.

Particle Swarm Optimization and Intelligence: Advances and Applications examines modern intelligent optimization algorithms that have proven very efficient in applications from various scientific and technological fields. Providing distinguished and unique research, this innovative publication offers a compendium of leading field experiences as well as theoretical analyses and complementary techniques useful to academicians and practitioners.

Topics Covered

The many academic areas covered in this publication include, but are not limited to:

  • Applications in machine learning
  • Applications in operations research
  • Classical optimization algorithms
  • Detection of periodic orbits
  • Fuzzy cognitive maps learning
  • Initialization techniques
  • Inverse scattering problem
  • Neural networks training for bioinformatics
  • Optimization problems
  • Particle swarm optimization
  • Sensitivity analysis and implications
  • Swarm intelligence
  • Theoretical analyses of particle trajectories

Reviews and Testimonials

This book serves as an essential reference guide to established advances in PSO, as well as a stepping stone for further developments. It is distinguished from related books by its exclusive focus on PSO, without, however, restricting its algorithmic material to the authors' own achievements and considerations. Thus, it can be valuable to a wide scientific audience of both novice and expert researchers.

– Konstantinos E. Parsopoulos, University of Ioannina, Greece; Michael N. Vrahatis, University of Patras, Greece

Preface


Optimization is the procedure of detecting the attributes, configurations, or parameters of a system that produce desirable responses. For example, in structural engineering, one is interested in detecting the best possible design that produces a safe and economic structure while adhering to specific mechanical engineering rules. Similarly, in computer science, one is interested in designing high-performance computer systems at the lowest cost, while in operations research, corporations strive to identify the best possible configuration of their production lines to increase their operational flexibility and efficiency. Numerous other systems (including human ones) can be mentioned where there is a need or desire for explicit or implicit improvement.

All these problems can be scientifically resolved through modeling and optimization. Modeling offers a translation of the original (physical, engineering, economic, etc.) problem to a mathematical structure that can be handled through algorithmic optimization procedures. The model is responsible for the proper representation of all key features of the original system and its accurate simulation. Concurrently, it offers a mathematical means of identifying and modifying the system’s properties to produce the most desirable outcome without requiring its actual construction, thereby saving time and cost.

The produced models are usually formulated as functions, called objective functions, in one or several variables that correspond to adaptable parameters of the system. The model is built in such a way that, based on the particular optimality criteria per case, the most desirable system configurations correspond to the extremal values of the objective function. Thus, the original system optimization problem is transformed to an equivalent function minimization or maximization problem. The difficulty in solving this problem is heavily dependent on the form and mathematical properties of the objective function.

It is possible that a solution of the optimization problem can be achieved through an analytic approach with minimal effort. Unfortunately, this case is rather the exception. In most problems, complicated systems are modeled by complicated multi-dimensional functions that cannot be easily addressed. In such cases, algorithmic procedures that take full advantage of modern computer systems can be implemented to solve the underlying optimization problems numerically. Of course, only approximations of the true solutions can be obtained in this way. Thus, computational accuracy, time criticality, and implementation effort become important aspects of the numerical optimization procedure.

To date, a multitude of algorithms that exploit favorable mathematical properties of the objective function, such as differentiability and Lipschitz continuity, have been developed. These approaches use first-order and second-order derivatives and achieve high convergence rates. However, the assumptions necessary for their application are not usually met in practice. Indeed, the blossoming of technological research and engineering has introduced a plethora of optimization problems with minimal available information regarding their form and inherent attributes. Typical properties of such problems are the existence of discontinuities, the lack of an analytical representation of the objective function, and the presence of noise.

In these circumstances, the applicability and efficiency of classical optimization algorithms are questionable, giving rise to the need for different optimization methods. Early attempts in this direction focused on stochastic algorithms that employ only function values. Pure random search is the most trivial approach, although its performance degenerates rapidly with problem dimension and complexity, since it does not exploit information gained in previous steps of the algorithm. On the other hand, combinations of random and classical algorithms have offered better results; nevertheless, strong mathematical assumptions on the objective function were still unavoidable.
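The pure random search mentioned above can be sketched in a few lines. The toy sphere objective, the search bounds, and the sampling budget below are illustrative choices, not taken from the book; the sketch only demonstrates why the method needs nothing beyond function evaluations, and why it has no memory of where good points were found.

```python
import random

def pure_random_search(f, bounds, budget=10_000, seed=0):
    """Sample uniformly at random within the bounds and keep the best point.

    Uses only function values (no derivatives), but also retains no memory
    of where good points were found -- the reason its performance degrades
    rapidly with problem dimension and complexity.
    """
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(budget):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Toy objective (sphere function): global minimum 0 at the origin.
sphere = lambda x: sum(xi * xi for xi in x)
x, fx = pure_random_search(sphere, bounds=[(-5.0, 5.0)] * 2)
```

Even on this easy two-dimensional problem, the returned value only approaches the minimizer as the budget grows; in higher dimensions the same budget covers the search space far more sparsely.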

However, researchers gradually realized that several systems observed in nature were able to cope efficiently with similar optimization problems. Thus, the trend of studying and incorporating models of natural processes into optimization algorithms gained ground, becoming an appealing alternative. Early approaches, such as simulated annealing, offered the potential to solve problems that were laborious for other algorithms. However, for a number of years they remained at the margins of the relevant literature, due to the limited theoretical developments at that time.

At the same time, a new type of algorithm was slowly but steadily emerging. The inspiration behind their development stemmed from the study of adaptation mechanisms in natural systems, such as DNA. The underlying operations that support evolutionary mechanisms according to Darwinian biological theory were modeled and used to evolve problem solutions, based on user-defined optimality criteria. Although these optimization approaches were initially supported by limited theoretical analyses, their promising results on complex problems previously considered intractable gave a boost to research, especially in the engineering community. Research groups in the USA and Europe succeeded in refining early variants of these algorithms, introducing a set of efficient approaches under the general name of evolutionary algorithms.

Theoretical studies were soon conducted, rendering these algorithms promising alternatives in cases where classical approaches are not applicable. The new type of algorithm combined all the desirable features of optimization methods with novel concepts. For example, search was performed not by a single agent but by a population of interacting search agents without central control. Stochasticity, communication, information exchange, and adaptation became part of their operation, while the requirements on the objective function were reduced to the least possible, namely the ability to perform function evaluations.

The success of evolutionary approaches sparked off research all over the world. As a result, in the mid-1990s a new category of algorithms appeared. Instead of modeling evolutionary procedures at the microscopic (DNA) level, these methods model populations at a macroscopic level, i.e., in terms of social structures and aggregating behaviors. Once again, nature offered inspiration and motivation to scientists. Hierarchically organized societies of simple organisms, such as ants, bees, and fish, with a very limited range of individual responses, exhibit fascinating behaviors with identifiable traits of intelligence as a whole. The lack of a central tuning and control mechanism in such systems triggered scientific curiosity. Simplified models of these systems were developed and studied through simulations. Their dynamics were approximated by mathematical models similar to those used in particle physics, while probability theory and stochastic processes offered a solid theoretical background for the development of a new category of algorithms under the name of swarm intelligence.

Particle swarm optimization (PSO) belongs to this category and constitutes the core subject of the book at hand. Its early precursors were simulators of social behavior that implemented rules such as nearest-neighbor velocity matching and acceleration by distance, to produce swarming behavior in groups of simple agents. As soon as the potential of these models to serve as optimization algorithms was recognized, they were refined, resulting in the first version of PSO, which was published in 1995 (Eberhart & Kennedy, 1995; Kennedy & Eberhart, 1995).

Since its development, PSO has gained wide recognition due to its ability to provide solutions efficiently, requiring only minimal implementation effort. This is reflected in Fig. 1, which illustrates the number of journal papers with the term “particle swarm” in their titles published by three major publishers, namely Elsevier, Springer, and IEEE, during the years 2000-2008. Moreover, its potential for straightforward parallelization, as well as its plasticity, i.e., the ability to easily adapt its components and operators to the form implied by the problem at hand, has placed PSO in a salient position among intelligent optimization algorithms.
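The minimal implementation effort mentioned above can be appreciated from a short sketch of the standard inertia-weight, global-best PSO variant. The parameter values (inertia weight w = 0.729, cognitive and social coefficients c1 = c2 = 1.49445) are commonly used defaults from the PSO literature, and the sphere objective is an illustrative toy problem; neither is taken from the book itself, whose own Matlab implementation appears in Appendix B.

```python
import random

def pso(f, bounds, n_particles=30, iters=200,
        w=0.729, c1=1.49445, c2=1.49445, seed=0):
    """Global-best PSO with the standard inertia-weight velocity update:
        v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),   x <- x + v
    where r1, r2 are uniform random numbers in [0, 1).
    """
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]              # best position found by each particle
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # best position found by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

# Toy objective (sphere function): global minimum 0 at the origin.
sphere = lambda x: sum(xi * xi for xi in x)
best, best_f = pso(sphere, bounds=[(-5.0, 5.0)] * 5)
```

Note that, as in the random-search sketch earlier, the only requirement on the objective is the ability to evaluate it; unlike random search, however, each particle exploits its own memory (pbest) and the swarm's shared memory (gbest).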

The authors of the book at hand have contributed a remarkable amount of work on PSO and gained in-depth experience of its behavior and of its handling of different problem types. This book constitutes an attempt to present the most recent and established developments of PSO within a unified framework and to share their long experience with the reader, as described in the following section.


This book is not about numerical optimization or evolutionary and swarm intelligence algorithms in general. It does not aim at analyzing general optimization procedures and techniques or demonstrating the application of evolutionary algorithms. The book at hand is completely devoted to the presentation of PSO. Its main objectives are to:

  • Provide a unified presentation of the most established PSO variants, from early precursors to current state-of-the-art approaches.
  • Provide a rough sketch of the established theoretical analyses and their impact on the established variants of the algorithm.
  • Provide complementary techniques that enhance its performance on particular problem types.
  • Illustrate its workings on a plethora of different problem types, providing guidelines and experimental settings along with representative results that equip the reader with the necessary background and offer a starting point for further investigation in similar problems.

All these objectives were meant to be achieved using the simplest possible notation and descriptions to render non-expert readers capable of taking full advantage of the presented material.

In order to achieve these ambitious goals, the authors relied on their accumulated experience of more than ten years working on PSO. The majority of the presented material is based on their own work, which corresponds to more than 60 papers in international refereed scientific journals, book chapters, edited volumes, and conference proceedings, and more than a thousand citations from other researchers. A wide variety of applications is presented together with details on published results. Also, pseudocode is provided for the presented methodologies and procedures when applicable, while special mathematical manipulations and transformations are analyzed whenever needed. Besides that, many recent developments of other researchers are reported, always striving to keep the presentation simple and understandable. The provided material and references can serve as an Ariadne’s thread for further acquisition and development of more sophisticated approaches.

We tried to avoid the pitfall of rendering this book yet another literature review on PSO; there are already excellent survey papers serving this purpose (Al Rashidi & El-Hawary, in press; Banks et al., 2007, 2008; Hendtlass & Randall, 2001; Parsopoulos & Vrahatis, 2002; Reyes-Sierra & Coello Coello, 2006; Yang & Li, 2004). Instead, only the most important developments with potential for further extension and improvement are analyzed. We did not aim at deluging the reader with a huge number of minor developments or reproductions of established results, but rather to offer the most essential concepts and considerations as the amalgam of our experience and personal viewpoint on the ongoing progress of PSO research over the years.

The target audience of the book includes undergraduate and graduate students with a special interest in PSO and related approaches, as well as researchers and scientists who employ heuristic algorithms for problem solving in science and engineering. The level of mathematics was intentionally kept low to make the descriptions comprehensible even to a multidisciplinary (non-expert) audience. Thus, elementary calculus is adequate for comprehending most of the algorithms presented in the first section of the book, while essential specialized knowledge of machine learning, dynamical systems, and operations research may be useful in specific chapters of the second section.

This book can also serve as an essential reference guide to established advances in PSO, as well as a stepping stone for further developments. It is distinguished from related books by its exclusive focus on PSO, without, however, restricting its algorithmic material to the authors' own achievements and considerations. Thus, it can be valuable to a wide scientific audience of both novice and expert researchers.


The book is divided into two sections. Section One consists of Chapters One to Five and presents basic developments in PSO along with theoretical derivations, state-of-the-art variants, and performance-enhancing techniques. Section Two consists of Chapters Six to Twelve, where various applications of PSO are presented and trends for future research are reported. The book also has two appendices. Appendix A contains descriptions of the test problems used throughout the text, especially in Section Two. Appendix B contains an indicative simple implementation of PSO in Matlab©, as well as further web resources on PSO. A brief description of each chapter is provided in the following paragraphs.

Chapter One introduces the reader to basic concepts of global optimization, evolutionary computation, and swarm intelligence; it outlines the necessity of solving optimization problems and identifies various problem types. Also, a rough classification of established optimization algorithms is provided, followed by the historical development of evolutionary computation. The three fundamental evolutionary approaches, namely genetic algorithms, evolutionary programming, and evolution strategies, are briefly presented, along with their basic features and operations. Then, swarm intelligence is presented, followed by short descriptions of its three main algorithms, namely ant colony optimization, stochastic diffusion search, and particle swarm optimization. Finally, reference is made to the no-free-lunch theorem to justify the necessity of further development of intelligent optimization algorithms.

Chapter Two is devoted to the presentation of particle swarm optimization (PSO). The description begins with the main source of inspiration that led to the development of its early precursors. Severe deficiencies of these approaches are pointed out and addressed by introducing new concepts, such as the inertia weight, velocity clamping, and neighborhoods. Finally, the reader is brought to the present day with an exposition of the standard contemporary developments, which are considered the state-of-the-art PSO variants nowadays.

Chapter Three briefly exposes the fundamental theoretical derivations and issues of PSO. Emphasis is given to developments that offered new insight in configuring and tuning its parameters. The chapter begins with a discussion on initialization procedures, followed by the first attempts to investigate particle trajectories. These studies opened the way for the stability analysis of PSO, which resulted in contemporary sophisticated variants and rules for parameter setting. We then present a useful technique for the optimal tuning of PSO on specific problems, based on computational statistics. The workings of this technique are illustrated in detail in two problems. The chapter closes with a short discussion on the most common termination conditions.

Chapter Four presents established and recently proposed PSO variants. The presented methods were selected from the extensive PSO literature according to various criteria, such as sophisticated inspiration source, close relationship to the standard PSO form, wide applicability in problems of different types, satisfactory performance and theoretical properties, number of reported applications, and potential for further development and improvement. Thus, the unified PSO, memetic PSO, composite PSO, vector evaluated PSO, guaranteed convergence PSO, cooperative PSO, niching PSO, TRIBES, and quantum PSO are described and their fundamental concepts are exposed.

Chapter Five closes the first section of the book by presenting performance-enhancing techniques. These techniques consist of transformations of either the objective function or the problem variables, enabling PSO to alleviate local minimizers, detect multiple global minimizers, handle constraints, and solve integer programming problems. The chapter begins with a short discussion of the filled functions approach, followed by the stretching technique as an alternative for alleviating local minimizers. Next, deflection and repulsion techniques are presented as means for detecting multiple minimizers with PSO. The penalty function approach for handling constraints is discussed and two recently proposed approaches are reported. The chapter closes with the description of two rounding schemes that enable the real-valued PSO to solve integer programming problems. The techniques are thoroughly described and, wherever possible, graphically illustrated.

Chapter Six focuses on the application of PSO to machine learning problems. Two representative cases, namely the training of artificial neural networks and learning in fuzzy cognitive maps, are first defined within a general framework, followed by illustrative examples that familiarize the reader with the main procedures and point out possible obstacles in the underlying optimization procedure.

Chapter Seven presents the application of PSO in dynamical systems and more specifically in the detection of periodic orbits. The transformation of the original fixed-point problem to the corresponding optimization problem is analyzed and indicative examples on widely used nonlinear mappings are reported in the first part of the chapter. The second part thoroughly describes an important application on the detection of periodic orbits in 3-dimensional galactic potentials, illustrating the ability of PSO to detect previously unknown solutions.

Chapter Eight consists of three representative applications of PSO in operations research. Similarly to previous chapters, attention is focused on the presentation of essential aspects of the applications rather than reviewing the existing literature. Thus, we present the formulation of the optimization problem from the original one, along with the efficient treatment of special problem requirements that cannot be handled directly by PSO. Applications from the fields of scheduling, inventory optimization, and game theory are given, accompanied by recently published results to provide a flavor of PSO’s efficiency per case.

Chapter Nine presents two interesting applications of PSO in bioinformatics and medical informatics. The first one consists of the adaptation of probabilistic neural network models for medical classification tasks. The second application considers PSO variants for tackling two magnetoencephalography problems, namely source localization and refinement of approximation models. The main points where PSO interferes with the employed computational models are analyzed, and details are provided regarding the formulation of the corresponding optimization problems and their experimental settings. Indicative results are also reported as representative performance samples.

Chapter Ten discusses the workings of PSO in two intimately related research fields with a special focus on real-world applications, namely noisy and dynamic environments. Noise simulation schemes are presented and experimental results on benchmark problems are reported. Also, the application of PSO to a simulated real-world problem, namely particle identification by light scattering, is presented. Moreover, a hybrid scheme that incorporates PSO into particle filtering methods to estimate system states online is analyzed, and representative experimental results are provided. Finally, the combination of noisy and continuously changing environments is briefly discussed, with illustrative graphical representations of performance for different PSO variants.

Chapter Eleven essentially closes the second section of the book by presenting applications of PSO to three very interesting problem types, namely multiobjective, constrained, and minimax optimization problems. The largest part of the chapter is devoted to the multiobjective case, which is supported by an extensive bibliography with a rich assortment of PSO approaches developed to date. Different algorithm types are presented and briefly discussed, focusing on the most influential approaches for each problem type.

Chapter Twelve closes the book by providing current trends and future directions in PSO research. This information can be beneficial to new researchers with an interest in conducting PSO-related research, since it enumerates open problems and active research topics.

Appendix A contains all test problems employed throughout the text. Thus, widely used unconstrained optimization problems, nonlinear mappings, inventory optimization problems, game theory problems, data sets for classification tasks in bioinformatics, multiobjective problems, constrained benchmark and engineering design problems, as well as minimax problems are reported. Citations for each problem are also provided.

Finally, Appendix B contains an indicative simple implementation of the PSO algorithm in Matlab©, as well as references to web resources for further information on developments and implementations of PSO.

Each chapter is written in a self-contained manner, although several references to the first section of the book are made in the chapters devoted to applications. We hope that readers will find our approach interesting and will help us improve it by providing their comments, considerations, and suggestions.

Author(s)/Editor(s) Biography

Konstantinos E. Parsopoulos received his Diploma in Mathematics, with specialization in Computational Mathematics and Informatics, in 1998 from the Department of Mathematics, University of Patras, Greece, and his PhD degree in Intelligent Computational Optimization in 2005 from the Departments of Mathematics and Computer Engineering & Informatics, University of Patras, Greece. He served as a Lecturer in the Department of Mathematics, University of Patras, Greece, from May 2008 to September 2009. He is currently an Assistant Professor in the Department of Computer Science, University of Ioannina, Greece. His research is focused on intelligent computational optimization algorithms with an emphasis on swarm intelligence and evolutionary computation approaches. His work consists of 16 papers in refereed international scientific journals and 47 papers in books, edited volumes, and conference proceedings, and has received more than 990 citations from other researchers. He has been a visiting researcher at the Department of Computer Science, University of Dortmund, Germany (2001), as well as at the Institut National de Recherche en Informatique et en Automatique (INRIA), Sophia-Antipolis, France (2003 and 2006). He has served as a member of the editorial board of 2 international scientific journals, as well as on the technical and program committees of 13 international conferences. Also, he has served as a reviewer for 24 scientific journals, as well as for the Portuguese Foundation for Science and Technology (FCT). He has participated in national and international research projects, and he has been a member of the ACM and the IEEE since 2005 and 1999, respectively.
Michael N. Vrahatis received his PhD in Computational Mathematics (1982) from the University of Patras, Greece. His work includes topological degree theory, systems of nonlinear equations, numerical and intelligent optimisation, data mining and unsupervised clustering, cryptography and cryptanalysis, as well as computational and swarm intelligence. He has been a Professor of Computational Mathematics at the University of Patras since 2000 and has served as Director of the same department's Computational Intelligence Laboratory since 2004. Several of his students serve as Lecturers, Assistant Professors, Associate Professors, or Full Professors in Greece, England, and Japan. He was a visiting research fellow at the Department of Mathematics, Cornell University, Ithaca, NY, USA (1987–1988) and a visiting professor at the INFN (Istituto Nazionale di Fisica Nucleare), Bologna, Italy (1992, 1994, and 1998); the Department of Computer Science, Katholieke Universiteit Leuven, Belgium (1999); the Department of Ocean Engineering, Design Laboratory, MIT, Cambridge, MA, USA (2000); and the Collaborative Research Center “Computational Intelligence” (SFB 531) at the Department of Computer Science, University of Dortmund, Germany (2001). He was a visiting researcher at CERN (European Organization for Nuclear Research), Geneva, Switzerland (1992) and at INRIA (Institut National de Recherche en Informatique et en Automatique), Sophia-Antipolis, France (1998, 2003, 2004, and 2006). He has participated in the organisation of over 60 conferences, serving in several positions, and has participated in over 200 conferences, congresses, and advanced schools as a speaker (contributed as well as invited) or observer. He is or has been a member of the editorial boards of eight international journals.
His work consists of over 330 publications (122 papers in refereed international scientific journals, and 209 papers in books, edited volumes, and conference proceedings), which have been cited by other researchers over 3,500 times (co-author citations excluded).