Taguchi-Particle Swarm Optimization for Numerical Optimization

T. O. Ting, H. C. Ting, T. S. Lee
Copyright: © 2010 |Pages: 16
DOI: 10.4018/jsir.2010040102

Abstract

In this work, a hybrid Taguchi-Particle Swarm Optimization (TPSO) is proposed to solve global numerical optimization problems with continuous and discrete variables. The hybrid algorithm combines the well-known Particle Swarm Optimization algorithm with the established Taguchi method, an important tool for robust design. This paper presents the improvements obtained despite the simplicity of the hybridization process. The Taguchi method is run only once in every PSO iteration and therefore adds little computational cost. It also creates a more diversified population, which helps the algorithm avoid premature convergence. The proposed method is applied to 13 benchmark problems. The results show drastic improvements over the standard PSO algorithm on high-dimensional benchmark functions with continuous and discrete variables.

Introduction

The innovative paradigm of behavioral modeling based on the concept of swarm intelligence was proposed by Kennedy and Eberhart (Kennedy & Eberhart, 1995). The particle swarm optimization (PSO) algorithm consists of members that share information among themselves, which leads to increased efficiency. A myriad of PSO variants have been developed, with numerous acronyms coined by their respective researchers. A survey of these variants discloses some common strategies adopted to achieve better convergence, which fall into three groups. The first strategy is to divide the population into several smaller sub-swarms. PSO variants of this category are:

  i. Co-evolutionary PSO (Co-PSO) (Shi & Krohling, 2002),

  ii. Cooperative PSO (CPSO) (Bergh & Engelbrecht, 2004),

  iii. Concurrent PSO (CONPSO) (Baskar & Suganthan, 2004),

  iv. Dynamic Multi-Swarm PSO (DMS-PSO) (Liang & Suganthan, 2005),

  v. Multi-population Cooperative PSO (MCPSO) (Niu, Zhu, & He, 2005),

  vi. Species-based PSO (SPSO) (Parrot & Li, 2006), and

  vii. NichePSO (Brits, Engelbrecht, & Bergh, 2007).
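All of the variants above build on the same canonical update rule. As a minimal sketch (the coefficient values, problem, and swarm size here are illustrative assumptions, not settings taken from any of the cited papers), one standard PSO run on the sphere function can be written as:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO update (Kennedy & Eberhart, 1995): new velocity =
    inertia + cognitive pull toward each particle's own best position +
    social pull toward the swarm's best, then move the particles."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# Minimize the 5-D sphere function f(x) = sum(x_i^2) with 20 particles.
sphere = lambda pop: (pop ** 2).sum(axis=1)
rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, (20, 5))   # random initial positions
v = np.zeros_like(x)                  # zero initial velocities
pbest, pbest_f = x.copy(), sphere(x)  # personal bests and their values
for _ in range(100):
    gbest = pbest[pbest_f.argmin()]   # swarm's global best
    x, v = pso_step(x, v, pbest, gbest, rng=rng)
    f = sphere(x)
    improved = f < pbest_f            # update personal bests
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
```

The sub-swarm variants listed above differ mainly in which particles contribute to `gbest` (a local, species, or cooperative best instead of one global best), while the update equation itself stays the same.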

The second strategy is to design the interaction between particles or sub-swarms. Instead of the normal social learning from the local best particle, leaders from other sub-swarms are allowed to influence the optimality search of a given subgroup. Hierarchical PSO (H-PSO) (Janson & Middendorf, 2005) and MCPSO are typical examples of this approach, where a hierarchy or master-slave relationship can be observed. In Co-PSO, CONPSO and Comprehensive Learning PSO (CLPSO) (Liang, Qin, Suganthan, & Baskar, n.d.), different social learning schemes are experienced by the particles.

The third strategy is to gradually transform the local neighborhood at the beginning of the search into a global one at the end. Unified PSO (UPSO) (Parsopoulos & Vrahatis, 2007) appears to have applied this approach successfully, whereas Adaptive Hierarchical PSO (AH-PSO) does exactly the opposite. Other variants, such as the Levy PSO (Richer & Blackwell, 2006) and PSO with a stretching function (Parsopoulos & Vrahatis, 2004), manipulate the random distribution and the objective function respectively, instead of the neighborhood topology.

Many of the improved PSO algorithms incorporate improvement strategies in the algorithm itself (Bergh & Engelbrecht, 2004; Yao, Liu, & Lin, 1999; Leung & Wang, 2001; Vasconcelos, Ramirez, Takahashi, & Saldanha, 2001; Clerc & Kennedy, 2002), while other improved algorithms are the result of hybridizing two algorithms (Krink & Lovbjerg, 2002; Tsai, Liu, & Chou, 2004). However, the common drawback of hybridization has been the complexity it introduces into the algorithm, which increases the computational cost. The unique feature of PSO compared to other algorithms is its fast convergence capability. The drawback, however, is that it does not guarantee a global optimum; much of the time, premature convergence occurs.

To overcome the above weakness of PSO, we incorporate the concept of the Taguchi method into PSO to avoid premature convergence while maintaining its fast convergence characteristic. The Taguchi method (Ross, 1989) is an established approach for robust design, applying ideas from statistical experiment design to the evaluation and improvement of products, processes and equipment. The key to the Taguchi concept is to improve the quality of a product by minimizing the effect of the causes of variation without eliminating the causes themselves. The two major tools used in the Taguchi method are the orthogonal array and the signal-to-noise ratio (SNR).
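As a hedged illustration of the first of these tools: a two-level orthogonal array lets a Taguchi step pick, dimension by dimension, the better of two candidate solutions using only a handful of objective evaluations and a main-effect analysis. The sketch below is our own illustration of this general idea (the function names and the Sylvester-Hadamard array construction are assumptions, not the paper's code), not the paper's exact TPSO procedure:

```python
import numpy as np

def two_level_oa(n_factors):
    """Two-level orthogonal array built from a Sylvester Hadamard matrix:
    rows are trials, columns are factors; every pair of columns is
    balanced, so main effects can be estimated independently."""
    n = 2
    while n - 1 < n_factors:        # smallest power of two with enough columns
        n *= 2
    h = np.array([[1]])
    while h.shape[0] < n:           # H_{2k} = [[H_k, H_k], [H_k, -H_k]]
        h = np.block([[h, h], [h, -h]])
    return (1 - h[:, 1:1 + n_factors]) // 2   # map +1 -> level 0, -1 -> level 1

def taguchi_step(p1, p2, f):
    """For each dimension (factor), choose the value (level) from parent
    p1 or p2 that gives the lower mean objective over the array's trials
    (minimization), i.e. a main-effect analysis."""
    dim = len(p1)
    parents = np.stack([p1, p2])              # level 0 -> p1, level 1 -> p2
    oa = two_level_oa(dim)
    trials = parents[oa, np.arange(dim)]      # (n_trials, dim) candidate mix
    y = np.array([f(t) for t in trials])
    best_level = np.array([
        0 if y[oa[:, j] == 0].mean() <= y[oa[:, j] == 1].mean() else 1
        for j in range(dim)
    ])
    return parents[best_level, np.arange(dim)]

# Example: each parent holds the optimum in different dimensions of the
# sphere function; the Taguchi step assembles the best of both.
sphere = lambda z: float((np.asarray(z) ** 2).sum())
p1 = np.array([1.0, 0.0, 1.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0, 1.0])
child = taguchi_step(p1, p2, sphere)
```

Because the array is orthogonal, only 8 trials are needed here instead of the 2^4 = 16 exhaustive combinations, which is why such a step can be run once per PSO iteration at little extra cost.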
